can i run a docker server (not client) without the docker desktop app (frontend) from the cli?
Yes. You can run the Docker daemon (server) without Docker Desktop, directly from the CLI—but it depends on your OS:
On Linux: you already get the Docker Engine (the `dockerd` daemon) and the `docker` CLI when you install Docker via the package manager (`apt`, `dnf`, etc.). No Docker Desktop needed.
sudo systemctl start docker
docker info   # confirms server is running
On macOS and Windows: these don't natively support the Docker daemon, since `dockerd` relies on Linux kernel features (namespaces, cgroups, etc.). But you have alternatives: run a lightweight Linux VM with `dockerd` inside, and keep using the regular CLI (`docker`). Example for `colima`:
brew install colima docker
colima start
docker info # should now show server details
Or set up your own Linux VM and run `dockerd` manually inside it. The daemon listens on a Unix socket (`/var/run/docker.sock`) or TCP, and you point the CLI at it with `DOCKER_HOST=tcp://...`.

In short: on Linux, just install and start `dockerd`. On macOS or Windows, use `colima`, `rancher-desktop`, or a Linux VM. Want help picking the cleanest setup for your platform?
on wsl2 sudo systemctl start docker gives System has not been booted with systemd as init system (PID 1). Can’t operate. Failed to connect to bus: Host is down
Right—WSL2 doesn't use `systemd` by default (it runs a minimal init unless you've enabled systemd in `/etc/wsl.conf`), so `systemctl` commands fail.
But you can still run the Docker daemon manually inside WSL2. Here’s how:
Make sure your WSL2 distro has Docker installed:
sudo apt update
sudo apt install docker.io
sudo dockerd &
That launches the Docker server (daemon) in the background. You might see some warnings—ignore unless they’re fatal.
docker info
That should now connect to the local daemon and show server details.
Since there's no systemd, you can add this to your `.bashrc` or `.zshrc`:
# Start Docker daemon if not running
pgrep dockerd > /dev/null || (nohup sudo dockerd > /dev/null 2>&1 &)
Or use a shell alias to start both:
alias dstart='sudo nohup dockerd > /dev/null 2>&1 &'
Let me know if you want to cleanly connect the WSL2 Docker daemon to the Windows Docker CLI or another remote client.
alias d='sudo nohup dockerd > /dev/null 2>&1 &'
command to add this alias to my zshrc
To add that alias to your `.zshrc`, run this:
echo "alias d='sudo nohup dockerd > /dev/null 2>&1 &'" >> ~/.zshrc
Then reload your shell config:
source ~/.zshrc
Now you can just type d
to spin up the Docker daemon in the background. Want it to auto-start when you launch WSL? I can help with that too.
install lazydocker
To install lazydocker on WSL2 (Ubuntu):
curl https://raw.githubusercontent.com/jesseduffield/lazydocker/master/scripts/install_update_linux.sh | bash
This downloads the latest release and installs it to `/usr/local/bin`.
Then run it:
lazydocker
If `dockerd` isn't running, it'll warn you. Start it with your `d` alias first.
Want a minimal TUI for containers and system resources? Could show you `ctop` or `btop` as well.
also give command to stop everything started here alias dstart='sudo nohup dockerd > /dev/null 2>&1 &'
To stop everything started via `dstart` (i.e., the background `dockerd` daemon), run:
sudo pkill dockerd
If you want an alias for that too:
echo "alias dstop='sudo pkill dockerd'" >> ~/.zshrc
source ~/.zshrc
Now you've got:
- `dstart` → starts `dockerd`
- `dstop` → stops it cleanly

Let me know if you want a `status` check too.
error getting credentials - err: exec: “docker-credential-desktop.exe”: executable file not found in $PATH, out: ` on docker-compose up -d
This error means your Docker CLI is trying to use Windows-specific credential helpers, but you're running inside WSL2 without Docker Desktop, so that helper (`docker-credential-desktop.exe`) doesn't exist.
Open:
nano ~/.docker/config.json
You'll probably see this:
{
  "credsStore": "desktop"
}
Either remove the `credsStore` entry entirely, or change it to:
"credsStore": "none"
So your config looks like:
{
  "auths": {},
  "credsStore": "none"
}
Or just this:
{}
Then re-run:
docker-compose up -d
It should now skip the credential helper error.
If you want to log in and store credentials after this change, use:
docker login
It'll store them base64-encoded (effectively plaintext) in `~/.docker/config.json`, so be aware of that.
Let me know if your compose setup still expects Docker Desktop-specific volumes or networks—those sometimes need patching too.
pull and run latest adminer
Here’s how to pull and run the latest Adminer container:
docker run -d --name adminer -p 8080:8080 adminer
- `-d`: detached mode
- `--name adminer`: name the container
- `-p 8080:8080`: expose on localhost:8080
- `adminer`: pulls the latest image from Docker Hub

Open your browser to http://localhost:8080
It’s a single-file PHP app, so no setup required—just enter your DB creds.
Want it to auto-connect to a database or tweak UI options? I can help configure that too.
now both postgres and adminer are running on docker wsl2 postgres on port 5432 and adminer on 8080, what host do i specify for adminer to connect? it is not accepting localhost nor host.docker.internal
You’re running both Postgres and Adminer in Docker inside WSL2, so they’re on an internal Docker network. Here’s how to connect Adminer to Postgres:
docker ps
Look for something like:
CONTAINER ID IMAGE NAMES
abc123 postgres my-postgres
Use the Postgres container's name as the host (not `localhost`, and not `host.docker.internal`). In the Adminer login form:

| Field | Value |
|---|---|
| System | PostgreSQL |
| Server | my-postgres |
| Username | postgres |
| Password | your password |
| Database | your DB name |
To avoid `localhost` issues, it's best to connect containers via a user-defined network:
docker network create adminer-net
docker run -d --name pg -e POSTGRES_PASSWORD=secret --network adminer-net postgres
docker run -d --name adminer -p 8080:8080 --network adminer-net adminer
Now in Adminer, use `pg` as the host.
Let me know your container names or `docker ps` output if you want exact commands.
can i docker-compose -up with a network flag?
Not directly with a `--network` flag on `docker-compose up`, but you define the network inside the docker-compose.yml instead. That's the proper way.
version: '3.8'
services:
  db:
    image: postgres
    container_name: pg
    environment:
      POSTGRES_PASSWORD: secret
    networks:
      - appnet
  adminer:
    image: adminer
    container_name: adminer
    ports:
      - 8080:8080
    networks:
      - appnet
networks:
  appnet:
Then run:
docker-compose up -d
Now:
- Adminer can reach Postgres at the hostname `pg`
- both containers share the `appnet` bridge network

If you really want to attach an external, pre-existing network, that's also possible with `external: true` (sketch below). Want that version?
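For reference, a minimal sketch of that variant, assuming you've already created the network beforehand with `docker network create appnet`:
```yaml
# docker-compose.yml attaching to a pre-existing network instead of creating one
version: '3.8'
services:
  adminer:
    image: adminer
    ports:
      - 8080:8080
    networks:
      - appnet
networks:
  appnet:
    external: true   # compose won't create it; it must already exist
```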
wsl2 install kubectl
To install `kubectl` in WSL2 (Ubuntu), run:
curl -LO "https://dl.k8s.io/release/$(curl -sL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client
That gives you the latest stable client.
Want to connect it to a local or remote cluster (e.g., `minikube`, `k3d`, `kind`, EKS, etc.)? I can walk you through that next.
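As a quick taste, here's a minimal sketch using `kind` (assuming you've already installed kind itself), which spins up a local cluster and points `kubectl` at it:
```bash
# create a throwaway local cluster named "dev"; kind sets the kubectl context automatically
kind create cluster --name dev
# verify kubectl can talk to it
kubectl cluster-info --context kind-dev
kubectl get nodes
```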
how tf did you know i was amd?
🔥 Good catch.
I defaulted to `amd64` because the vast majority of WSL2 installs (especially with Ubuntu) run on x86_64; Windows-on-ARM devices are the exception. But you're right: I assumed. Here's an architecture-aware version:
ARCH=$(uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/')
curl -LO "https://dl.k8s.io/release/$(curl -sL https://dl.k8s.io/release/stable.txt)/bin/linux/${ARCH}/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
Now it adapts properly. Want a one-liner install script for `helm`, `kind`, or `k9s` while you're at it?
using kubectl port-forward pod/stg-postgresql-0 -n postgresql 5432:5432 i want to connect to this with cloudbeaver/adminer i have the db credentials but what is the host name? neither localhost:5432 nor host.docker.internal work
You’re forwarding a port from a Kubernetes pod to your WSL2 environment, but WSL2 has its own network namespace—Docker containers (like Adminer) can’t reach localhost:5432 of WSL2 directly.
- `kubectl port-forward` exposes 5432 on WSL2's localhost
- `localhost` inside the Adminer container points to the container itself, not WSL2

Option 1: Use a native Adminer via PHP, or CloudBeaver (which runs in the browser):
php -S localhost:8080 adminer.php
Then connect using:
- Host: `localhost`
- Port: `5432`

✅ Works because Adminer is in the same namespace as the `kubectl port-forward`.
Option 2: Hacky but possible. Run the Adminer container with a host-gateway mapping:
docker run -d --name adminer \
-p 8080:8080 \
--add-host=host.docker.internal:host-gateway \
adminer
Then in Adminer:
- Host: `host.docker.internal`
- Port: `5432`

⚠️ This only works if your Docker engine supports `host-gateway` (newer versions do).
Simplest overall: use the native Linux or WSL2 version of these tools, connect via `localhost:5432`, and avoid Docker entirely.
Want the Adminer-in-WSL approach or just go full native with CloudBeaver in browser?
host-gateway did not work
Understood—if `host-gateway` didn't work, it likely means your Docker in WSL2 lacks that feature (common in older versions or non-rootless setups). Let's pivot cleanly. You want Adminer and `kubectl port-forward` in the same network namespace.
wget https://github.com/vrana/adminer/releases/download/v4.8.1/adminer-4.8.1.php -O adminer.php
php -S localhost:8080 adminer.php
| Field | Value |
|---|---|
| System | PostgreSQL |
| Server | localhost |
| Port | 5432 |
| User | <your-user> |
| Password | <your-password> |
| DB | <your-db> |
This works because both the Adminer PHP server and `kubectl port-forward` are in WSL2 and can share `localhost`.
Let me know if you want to do the same with CloudBeaver or DBeaver instead—I’ll steer you toward a native or headless setup.
i am running cloudbeaver not adminer, and in docker
Perfect—CloudBeaver in Docker, Postgres exposed via kubectl port-forward on WSL2. Here’s the key insight:
Docker containers in WSL2 can’t access the WSL2 host’s localhost directly.
So even though you're exposing `localhost:5432` via `kubectl port-forward`, your CloudBeaver container is isolated from that.
But since `host-gateway` failed, you've got two viable paths:
Path 1: Use the CloudBeaver standalone version for Linux, run directly in WSL2:
./run-server.sh
Then connect with:
- Host: `localhost`
- Port: `5432`

✅ This works flawlessly because it shares the same namespace as the `kubectl port-forward`.
Path 2: a `socat` bridge. Create a "fake" Postgres host that pipes traffic from the Docker container into WSL2's `localhost`.
sudo apt install socat
sudo socat TCP-LISTEN:15432,fork TCP:localhost:5432
This makes WSL2 expose port `15432`, which forwards to your real port-forwarded 5432.
docker run -d --name cloudbeaver \
-p 8978:8978 \
--add-host=host.docker.internal:172.17.0.1 \
dbeaver/cloudbeaver:latest
You may need to replace `172.17.0.1` with your WSL2 eth0 address instead, found via:
ip addr | grep eth0
Then in CloudBeaver connect with:
- Host: `host.docker.internal`
- Port: `15432`

✅ This routes Docker ➝ `host.docker.internal` ➝ `socat` ➝ `localhost:5432` ➝ K8s pod.
| Option | Difficulty | Works With Docker | Notes |
|---|---|---|---|
| Run CloudBeaver natively | Easy | 🚫 | Easiest if you drop Docker for the UI |
| Use socat bridge | Medium | ✅ | Clean workaround for the WSL2 namespace gap |
Let me know if you want to auto-start the `socat` bridge or connect to multiple K8s DBs this way.
when using docker desktop it used to work just fine, do i have to use socat?
You’re absolutely right—and your instinct is spot-on.
When you were using Docker Desktop, it ran the Docker daemon on Windows, and your WSL2 `localhost` was accessible via `host.docker.internal`. That bridged-networking magic is part of Docker Desktop's integration layer.
Now you're using Docker Engine inside WSL2 directly. That means:
- `localhost` in WSL2 ≠ `localhost` in containers
- `host.docker.internal` no longer works out of the box

So yes, you effectively need the `socat` bridge (or one of the workarounds above, like running the client natively). It restores the connectivity Docker Desktop used to give you implicitly.
Want a systemd-style background service for `socat`, so it's fire-and-forget? Or a Compose override for injecting it alongside CloudBeaver? I can rig that up.
works! sudo socat TCP-LISTEN:15432,fork TCP:localhost:5432 can this run in the background?
Yes—very cleanly.
sudo nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /dev/null 2>&1 &
This runs `socat` in the background, detached from your terminal, with output silenced. You can throw that into a startup script or `.zshrc` if you want it auto-spawned.

Or make it an alias:
echo "alias pgbridge='sudo nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /dev/null 2>&1 &'" >> ~/.zshrc
source ~/.zshrc
Now you can just type `pgbridge` anytime.
Find the process:
ps aux | grep '[s]ocat TCP-LISTEN:15432'
Kill it:
sudo kill <PID>
Or:
sudo pkill -f 'socat TCP-LISTEN:15432'
Want it to auto-start on WSL launch, or as a systemd service inside WSL2 with `systemd-genie`? Let me know and I'll wire that up.
can this run in the background kubectl port-forward pod/stg-postgresql-0 -n postgresql 5432:5432 ?
Yes, `kubectl port-forward` can run in the background, but it's more fragile than `socat`: you need to watch for pod restarts or timeouts.
nohup kubectl port-forward pod/stg-postgresql-0 -n postgresql 5432:5432 > /dev/null 2>&1 &
You can alias this too:
echo "alias pgfwd='nohup kubectl port-forward pod/stg-postgresql-0 -n postgresql 5432:5432 > /dev/null 2>&1 &'" >> ~/.zshrc
source ~/.zshrc
Then run `pgfwd` anytime.
ps aux | grep '[k]ubectl port-forward'
sudo kill <PID>
Or if you want a one-liner:
sudo pkill -f 'kubectl port-forward pod/stg-postgresql-0'
If you need the forward reachable from other machines (or from containers), bind it to all interfaces:
kubectl port-forward --address=0.0.0.0 pod/stg-postgresql-0 -n postgresql 5432:5432
Only do that if you really trust your LAN; it's wide open.
Let me know if you want to pair this with a health check or auto-reconnect strategy.
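If you want the auto-reconnect behavior, here's a minimal sketch (a hypothetical `pgfwd_loop` helper, not something kubectl provides): it simply restarts the forward whenever it exits, e.g. after a pod restart.
```bash
# keep the port-forward alive by restarting it whenever it dies
pgfwd_loop() {
  while true; do
    kubectl port-forward pod/stg-postgresql-0 -n postgresql 5432:5432
    echo "port-forward exited, retrying in 3s..." >&2
    sleep 3
  done
}
# run it detached (zsh: &! backgrounds and disowns in one go)
# pgfwd_loop > /dev/null 2>&1 &!
```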
add this to zshrc sudo pkill -f 'kubectl port-forward pod/stg-postgresql-0'
To add that `pkill` command as an alias in your `.zshrc`, run:
echo "alias pgfwd-stop=\"sudo pkill -f 'kubectl port-forward pod/stg-postgresql-0'\"" >> ~/.zshrc
source ~/.zshrc
Now you can run:
pgfwd-stop
That kills the port-forward cleanly. Want me to pair it with a matching `pgfwd-start` alias for symmetry?
what do you mean?
Just this: you've now got a `pgfwd-stop` alias to stop the port-forward. For symmetry, you might also want a matching `pgfwd-start` alias to start it in the background.
Here’s the pair:
echo "alias pgfwd-start=\"nohup kubectl port-forward pod/stg-postgresql-0 -n postgresql 5432:5432 > /dev/null 2>&1 &\"" >> ~/.zshrc
echo "alias pgfwd-stop=\"sudo pkill -f 'kubectl port-forward pod/stg-postgresql-0'\"" >> ~/.zshrc
source ~/.zshrc
Then you have:
- `pgfwd-start` → starts the port-forward in the background
- `pgfwd-stop` → kills it

Want to include a quick health check or retry logic with the start alias?
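A quick health check could look like this rough sketch (hypothetical `pgfwd_check` helper; assumes `nc` from the netcat package is installed):
```bash
# report whether anything is actually listening on the forwarded port
pgfwd_check() {
  if nc -z localhost 5432; then
    echo "✅ 5432 reachable"
  else
    echo "❌ nothing listening on 5432"
  fi
}
```
Run `pgfwd-start`, wait a second, then `pgfwd_check` to confirm the tunnel is up.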
how can i ensure socat is not forwarding
To check whether `socat` is currently forwarding, or to ensure it's not running, do this:
ps aux | grep '[s]ocat TCP-LISTEN:15432'
If it returns a line, `socat` is still forwarding. If nothing, it's not running.
sudo pkill -f 'socat TCP-LISTEN:15432'
That guarantees the listener is down.
Double-check the port itself:
lsof -i :15432
If there's no output, the port is free and `socat` is fully stopped.
Want a script or alias to check and stop if needed?
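Here's a small sketch of such a helper (a hypothetical `socat_ensure_down` function, built from the same commands used above):
```bash
# stop the 15432 bridge only if it's actually running
socat_ensure_down() {
  if pgrep -f 'socat TCP-LISTEN:15432' > /dev/null; then
    sudo pkill -f 'socat TCP-LISTEN:15432'
    echo "🛑 socat bridge on 15432 stopped"
  else
    echo "✅ no socat bridge running on 15432"
  fi
}
```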
can i rename a running docker container?
Yes, you can rename a running Docker container with:
docker rename old_name new_name
It works instantly—no need to stop the container.
Want a helper that renames and relinks network aliases in one go?
can you give me this entire chat in markdown format?
Sure—here’s the full chat in clean, timestamp-free Markdown format:
# Docker, WSL2, Kubernetes Port Forwarding, and CloudBeaver Setup
## 🐳 Run Docker server (daemon) without Docker Desktop?
> Yes, you can start the Docker daemon via CLI:
```bash
sudo nohup dockerd > /dev/null 2>&1 &
```
Alias:
alias d='sudo nohup dockerd > /dev/null 2>&1 &'
To add to `.zshrc`:
echo "alias d='sudo nohup dockerd > /dev/null 2>&1 &'" >> ~/.zshrc
source ~/.zshrc
To stop:
sudo pkill dockerd
Install lazydocker:
curl https://raw.githubusercontent.com/jesseduffield/lazydocker/master/scripts/install_update_linux.sh | bash
## 🐘 Adminer/CloudBeaver in Docker can't reach a port-forwarded Postgres

Running Postgres on port `5432` and Adminer on `8080`, but Adminer in Docker can't reach `localhost:5432`. Neither `localhost` nor `host.docker.internal` works.

Option 1: Run Adminer in WSL2 directly
wget https://github.com/vrana/adminer/releases/download/v4.8.1/adminer-4.8.1.php -O adminer.php
php -S localhost:8080 adminer.php
Use:
- Host: `localhost`
- Port: `5432`
Option 2: socat TCP bridge
sudo apt install socat
sudo socat TCP-LISTEN:15432,fork TCP:localhost:5432
Then point Adminer or CloudBeaver at:
- Host: `host.docker.internal`
- Port: `15432`

To run CloudBeaver in Docker:
docker run -d --name cloudbeaver \
-p 8978:8978 \
--add-host=host.docker.internal:172.17.0.1 \
dbeaver/cloudbeaver:latest
Run the socat bridge in the background:
sudo nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /dev/null 2>&1 &
Alias:
echo "alias pgbridge='sudo nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /dev/null 2>&1 &'" >> ~/.zshrc
source ~/.zshrc
Stop it:
sudo pkill -f 'socat TCP-LISTEN:15432'
Run the kubectl port-forward in the background:
nohup kubectl port-forward pod/stg-postgresql-0 -n postgresql 5432:5432 > /dev/null 2>&1 &
Alias:
echo "alias pgfwd-start=\"nohup kubectl port-forward pod/stg-postgresql-0 -n postgresql 5432:5432 > /dev/null 2>&1 &\"" >> ~/.zshrc
echo "alias pgfwd-stop=\"sudo pkill -f 'kubectl port-forward pod/stg-postgresql-0'\"" >> ~/.zshrc
source ~/.zshrc
Stop:
pgfwd-stop
Check whether socat is still forwarding:
ps aux | grep '[s]ocat TCP-LISTEN:15432'
Ensure it's dead:
sudo pkill -f 'socat TCP-LISTEN:15432'
lsof -i :15432   # no output means the port is free
Rename a running container:
docker rename old_name new_name
Works live. No restart needed.
Want it as a `.md` file or copy-ready text snippet?
kubectl port-forward pod/stg-postgresql-0 -n postgresql 5432:5432
in this command what is pod/stg-postgresql-0 and how can i list others of its type
`pod/stg-postgresql-0` refers to a specific Pod named `stg-postgresql-0` in the `postgresql` namespace. That Pod is most likely part of a StatefulSet, commonly used for databases.
To list its siblings (pods carrying the StatefulSet pod-name label):
kubectl get pods -n postgresql -l statefulset.kubernetes.io/pod-name
Or, more generally:
kubectl get pods -n postgresql -o wide
Look for others named like:
stg-postgresql-0
stg-postgresql-1
stg-postgresql-2
That pattern (`-0`, `-1`, etc.) confirms it's a StatefulSet.
To see the StatefulSet itself:
kubectl get statefulsets -n postgresql
To describe it:
kubectl describe statefulset stg-postgresql -n postgresql
Want a one-liner to list all pods tied to that StatefulSet or see what volumes each pod is using?
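For example, this rough one-liner (plain `kubectl` JSONPath, nothing project-specific) prints each pod in the namespace alongside the volumes it mounts:
```bash
# pod name, then the names of its volumes, tab-separated
kubectl get pods -n postgresql \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].name}{"\n"}{end}'
```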
i have a container somewhere on k8s with the name “supertokens” in it, how can i find it?
To find any pod/container across your entire cluster with “supertokens” in its name, run:
kubectl get pods --all-namespaces | grep supertokens
This will match pods like:
- `supertokens-xyz`
- anything with `supertokens` in the pod name

If only the container (not the pod) is named supertokens, you'll need to dig deeper:
kubectl get pods --all-namespaces -o jsonpath="{range .items[*]}{.metadata.namespace}{'\t'}{.metadata.name}{'\t'}{range .spec.containers[*]}{.name}{'\n'}{end}{end}" | grep supertokens
That shows:
namespace pod-name container-name
If you find one like `auth/supertokens-abc123`, inspect it with:
kubectl describe pod supertokens-abc123 -n auth
Want to extract the image version or logs too?
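For the image version and logs, something like this works (reusing the hypothetical `supertokens-abc123` pod in the `auth` namespace from the example above):
```bash
# show the container image(s) the pod is running
kubectl get pod supertokens-abc123 -n auth -o jsonpath='{.spec.containers[*].image}'
# tail the most recent log lines
kubectl logs supertokens-abc123 -n auth --tail=100
```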
kubectl list all available namespaces
To list all namespaces in your cluster:
kubectl get namespaces
Or shorthand:
kubectl get ns
For just the names (handy in scripts):
kubectl get ns -o jsonpath="{.items[*].metadata.name}"
That gives a space-separated list like:
default kube-system postgresql auth frontend
Let me know if you want to search for `supertokens` across all of them automatically.
list everything in a namespace
To list everything in a specific namespace (pods, services, deployments, etc.):
kubectl get all -n <namespace>
Example:
kubectl get all -n postgresql
This shows the core workload objects: pods, services, deployments, replica sets, stateful sets, and jobs. Note that `get all` does not include ConfigMaps, Secrets, PVCs, ServiceAccounts, or Ingresses. To pull those in too, use:
kubectl get all,cm,secret,pvc,sa,ingress -n <namespace>
Or if you want the kitchen sink:
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n1 kubectl get -n <namespace>
Let me know if you want a script to pretty-print it.
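A minimal pretty-printer sketch, built on the same kitchen-sink command (namespace hard-coded here as an example):
```bash
# print a header per resource type, skipping types with nothing in the namespace
for r in $(kubectl api-resources --verbs=list --namespaced -o name); do
  out=$(kubectl get "$r" -n postgresql --ignore-not-found 2>/dev/null)
  [ -n "$out" ] && printf '\n=== %s ===\n%s\n' "$r" "$out"
done
```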
if supertokens-core-5fb5dbf697-f9qvr is a pod how can i access the containers running in it? and port forward them like the other postgres command
Perfect. `supertokens-core-5fb5dbf697-f9qvr` is a pod, and inside it are one or more containers; most likely just one, named `supertokens`.
kubectl get pod supertokens-core-5fb5dbf697-f9qvr -o jsonpath="{.spec.containers[*].name}"
You’ll get something like:
supertokens
Port-forwarding is per-pod, not per-container. If the container inside exposes (say) port 3567
, forward it like this:
kubectl port-forward pod/supertokens-core-5fb5dbf697-f9qvr -n <namespace> 3567:3567
Replace `<namespace>` with the actual namespace, like `auth`, `default`, or wherever that pod lives.
Not sure which port the container exposes? Run:
kubectl describe pod supertokens-core-5fb5dbf697-f9qvr -n <namespace>
Look under:
Containers:
Ports:
ContainerPort: 3567
Want an alias or script to auto-port-forward `supertokens` by name, no matter the pod suffix?
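A sketch of such a helper (hypothetical `stfwd` function; the pod-name prefix and port 3567 are assumptions based on the example above):
```bash
# find the first pod whose name contains "supertokens-core" in a namespace and forward 3567
stfwd() {
  local ns="${1:-default}"
  local pod
  pod=$(kubectl get pods -n "$ns" -o name | grep supertokens-core | head -n1)
  if [ -z "$pod" ]; then
    echo "no supertokens-core pod found in namespace $ns"
    return 1
  fi
  # &! is zsh-specific: background and disown in one step
  nohup kubectl port-forward "$pod" -n "$ns" 3567:3567 > /dev/null 2>&1 &!
  echo "forwarding $pod 3567 → localhost:3567"
}
```
Usage: `stfwd auth` (or whatever namespace the pod lives in).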
sudo nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /dev/null 2>&1
this does not work as a nohup, why? as soon as i make it fg it starts to work, fix it pls, explain why it can/cannot be fixed, also elaborate on the > /dev/null part
You’re running:
sudo nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /dev/null 2>&1 &
But the `socat` listener doesn't appear to work in the background until you bring it to the foreground (`fg`). Here's why, and how to fix it.

Why it happens: the culprit is usually `sudo`, not `socat`. When you background the whole pipeline from your shell, `sudo` may still need to read a password (or otherwise touch the TTY), and a background job that tries to read from the terminal is stopped until you `fg` it. The `nohup` and the redirection don't cause the stop, but they do hide the prompt, so it just looks like `socat` silently did nothing.

The fix: let `sudo` authenticate in the foreground and do the backgrounding inside the root shell. Use this instead:
sudo bash -c 'nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /dev/null 2>&1 &'
This ensures:
- `sudo` itself runs in your foreground shell, so it can authenticate
- `nohup`, the redirection, and the `&` all happen inside the root shell, in the same session as `socat`

About the `> /dev/null 2>&1` part. This redirects:
- stdout to `/dev/null` → silences normal output
- stderr to stdout (`2>&1`) → silences errors too

You're effectively muting all output. Useful for background services, but it hides failures. For debugging, try:
sudo bash -c 'nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /tmp/socat.log 2>&1 &'
Then inspect with `tail -f /tmp/socat.log`.
To confirm it's actually listening:
lsof -i :15432
or
ps aux | grep '[s]ocat TCP-LISTEN:15432'
Let me know if you want this wrapped as a safe startup alias or systemd service.
i’m zsh
Perfect—here’s the Zsh-compatible, fixed version that works reliably:
sudo nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /tmp/socat.log 2>&1 &
- under Zsh, `nohup` plays better if you don't wrap it in a subshell
- logging to `/tmp/socat.log` avoids buffering issues or `socat` appearing to hang during background startup

If you're still having issues, force a disown:
sudo nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /tmp/socat.log 2>&1 &!
That `&!` tells Zsh to start it in the background and disown it immediately—no job tracking.
echo "alias pgbridge='sudo nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /tmp/socat.log 2>&1 &!'" >> ~/.zshrc
source ~/.zshrc
Now you can run:
pgbridge
To stop it:
sudo pkill -f 'socat TCP-LISTEN:15432'
Want this turned into a script with status check + log tailing too?
i already have a pgbridge alias, i want to update it will this command do that?
Yes — if you already have a `pgbridge` alias and want to update it, run:
sed -i'' '/alias pgbridge=/c\alias pgbridge='\''sudo nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /tmp/socat.log 2>&1 &!'\''' ~/.zshrc
Then reload:
source ~/.zshrc
This:
- replaces the existing `alias pgbridge=...` line in `.zshrc`
- uses `&!` for clean backgrounding/disowning under Zsh
- logs to `/tmp/socat.log` for troubleshooting

Let me know if you want it to check whether `socat` is already running before starting.
can i see a list of everything i have under nohup?
`nohup` itself doesn't track anything — it just detaches the process and ignores SIGHUP. So to see what you have running under `nohup`, you need to search for processes started that way.
ps aux | grep '[n]ohup'
Or, if you know you're running `socat` or other commands with it:
ps aux | grep '[s]ocat TCP-LISTEN'
pgrep -af nohup
Or general:
ps -eo pid,ppid,command | grep nohup
This shows the command line of any process that still has `nohup` in its arguments. Note that `nohup` normally replaces itself with the command it launches, so these greps often come up empty; searching for the command itself (like `socat`) is more reliable.
The default output file is `nohup.out` — check in:
ls -lh ~/nohup.out /tmp/socat.log
If you're systematic, you can wrap your `nohup`-ed commands in named functions, something like:
pgbridge() {
  sudo nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /tmp/pgbridge.log 2>&1 &!
}
Then track them reliably via the log name, process name, or alias. Want a script to auto-list + kill them?
this pgbridge is great, can i add it to my zshrc? how can i invoke?
Yes, you can add it as a function in your `.zshrc` and call it like any command. Here's how:
Run this once:
cat <<'EOF' >> ~/.zshrc
# Starts a TCP bridge from host:15432 to local pod:5432 (PostgreSQL)
pgbridge() {
  sudo nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /tmp/pgbridge.log 2>&1 &!
  echo "🔌 pgbridge started on port 15432 → localhost:5432"
}
EOF
Then reload:
source ~/.zshrc
Just type:
pgbridge
You’ll get:
🔌 pgbridge started on port 15432 → localhost:5432
To check it's listening:
lsof -i :15432
Or test a DB connection to `localhost:15432`.
Add this too, if you want:
echo "alias pgbridge-kill='sudo pkill -f \"socat TCP-LISTEN:15432\"'" >> ~/.zshrc
Reload:
source ~/.zshrc
Then call:
pgbridge-kill
Let me know if you want it to auto-check if it’s already running before starting.
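That auto-check variant could look like this (same function as above, with a guard added):
```bash
# refuses to start a second listener if one is already bound to 15432
pgbridge() {
  if pgrep -f 'socat TCP-LISTEN:15432' > /dev/null; then
    echo "⚠️  pgbridge already running on port 15432"
    return 0
  fi
  sudo nohup socat TCP-LISTEN:15432,fork TCP:localhost:5432 > /tmp/pgbridge.log 2>&1 &!
  echo "🔌 pgbridge started on port 15432 → localhost:5432"
}
```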
can i have variables in my zshrc and then use them in aliases and update them from prompt?
Absolutely — Zsh fully supports variables in `.zshrc`, and you can use and update them both in aliases/functions and from the command line.
Define it in `.zshrc`:
export DB_PORT=15432
Now it's available anywhere.
alias pgbridge="sudo nohup socat TCP-LISTEN:$DB_PORT,fork TCP:localhost:5432 > /tmp/pgbridge.log 2>&1 &!"
(Note the double quotes: the alias bakes in whatever `$DB_PORT` was when `.zshrc` was sourced.) Or use a function, preferred for anything non-trivial, since it reads the variable at run time:
pgbridge() {
sudo nohup socat TCP-LISTEN:$DB_PORT,fork TCP:localhost:5432 > /tmp/pgbridge.log 2>&1 &!
echo "🔌 pgbridge started on port $DB_PORT"
}
export DB_PORT=16000
pgbridge # now uses new port
The function will use the updated value on the next run.
To update it in `.zshrc` from your shell:
sed -i'' "s/^export DB_PORT=.*/export DB_PORT=16000/" ~/.zshrc
Then:
source ~/.zshrc
This updates your startup config permanently.
Want to build a little helper to view/edit env vars from a menu or prompt?
say i have these two vars in zshrc
give me two functions that use sed to update them via the cmd line
Assuming you have these two in your `.zshrc`:
export DB_PORT=15432
export PG_HOST=localhost
set_db_port() {
  local port="$1"
  if [[ -z "$port" ]]; then
    echo "❌ Usage: set_db_port <port>"
    return 1
  fi
  sed -i'' "s/^export DB_PORT=.*/export DB_PORT=${port}/" ~/.zshrc
  export DB_PORT="$port"
  echo "✅ DB_PORT updated to $DB_PORT"
}
set_pg_host() {
  local host="$1"
  if [[ -z "$host" ]]; then
    echo "❌ Usage: set_pg_host <host>"
    return 1
  fi
  sed -i'' "s/^export PG_HOST=.*/export PG_HOST=${host}/" ~/.zshrc
  export PG_HOST="$host"
  echo "✅ PG_HOST updated to $PG_HOST"
}
set_db_port 16000
set_pg_host 127.0.0.1
These update the `export` lines in `~/.zshrc` and also export the new value into your current shell, so the change takes effect immediately.

Let me know if you want a single `setenv VAR VALUE` style instead (sketch below).
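That generic version might look like this rough sketch (a hypothetical `setenv` helper; same sed approach as above, plus appending the line if it doesn't exist yet):
```bash
# update (or add) an `export VAR=...` line in ~/.zshrc and the current shell
setenv() {
  local var="$1" val="$2"
  if [[ -z "$var" || -z "$val" ]]; then
    echo "❌ Usage: setenv VAR VALUE"
    return 1
  fi
  if grep -q "^export ${var}=" ~/.zshrc; then
    sed -i'' "s|^export ${var}=.*|export ${var}=${val}|" ~/.zshrc
  else
    echo "export ${var}=${val}" >> ~/.zshrc
  fi
  export "${var}=${val}"
  echo "✅ ${var} set to ${val}"
}
```
Usage: `setenv DB_PORT 16000`.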
stop all socat bridges
To stop all running `socat` bridges cleanly:
sudo pkill -f '^socat TCP-LISTEN:'
This kills only `socat` processes that were started with `TCP-LISTEN:` — your bridges, not any unrelated uses.
ps aux | grep '[s]ocat TCP-LISTEN:'
That’ll show what’s active before killing them.
Let me know if you want to wrap this in an alias like `socat-killall`.
yes add alias
Add this to your `.zshrc`:
alias socat-killall="sudo pkill -f '^socat TCP-LISTEN:' && echo '🛑 All socat bridges stopped.'"
Then reload:
source ~/.zshrc
socat-killall
You’ll get:
🛑 All socat bridges stopped.
Want a `socat-list` too, to show active bridges?
any way to list all existing port-forwards by kubectl?
There's no built-in `kubectl` command to list all active `port-forward` sessions — because they run as background processes, not persistent Kubernetes resources. But you can track them with system tools:
ps aux | grep '[k]ubectl port-forward'
This shows each active port-forward, the pod, namespace, and ports involved.
lsof -i -nP | grep LISTEN | grep kubectl
This lists the actual ports exposed on localhost by `kubectl port-forward`.
Add to `.zshrc`:
alias kpf-list="ps aux | grep '[k]ubectl port-forward'"
Then use:
kpf-list
Want a `kpf-killall` too, like with socat?
yes kill all
Add this to your `.zshrc` to kill all active kubectl port-forward sessions:
alias kpf-killall="sudo pkill -f '^kubectl port-forward' && echo '🛑 All kubectl port-forwards stopped.'"
Then reload:
source ~/.zshrc
kpf-killall
It will cleanly stop all your `kubectl port-forward` background sessions.
Want a `kpf-clean` too, to kill and remove related logs or temporary aliases?
list all socat bridges
To list all active socat bridges, use:
ps aux | grep '[s]ocat TCP-LISTEN:'
This shows each `socat` bridge with its ports and full command line.
sudo lsof -nP -iTCP -sTCP:LISTEN | grep socat
Or a simplified list of listening ports:
sudo netstat -tulnp | grep socat
Alias it:
alias socat-list="ps aux | grep '[s]ocat TCP-LISTEN:'"
Then just:
socat-list
Want to show port → destination mappings in a prettier format?
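A rough sketch of that prettier view, just parsing the `socat` command lines (assumes the `TCP-LISTEN:port ... TCP:target` form used throughout this chat):
```bash
# turn each running bridge into "LISTEN_PORT → TARGET"
ps -eo args | grep '^socat TCP-LISTEN:' \
  | sed -E 's/^socat TCP-LISTEN:([0-9]+)[^ ]* TCP:([^ ]+).*/\1 → \2/'
```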