Connection refused to Postgres on Node.js with Docker

I'm building an app that runs on Node.js with PostgreSQL, using SequelizeJS as the ORM. To avoid installing a real Postgres daemon and Node.js on my own machine, I'm using containers with docker-compose.

When I run docker-compose up, it starts the Postgres database:

database system is ready to accept connections

and the Node.js server. But the server cannot connect to the database:

Error: connect ECONNREFUSED 127.0.0.1:5432

If I run the server without containers (with real Node.js and Postgres on my machine), it works.

But I want it to work properly with the containers, and I don't understand what I'm doing wrong.

Here is the docker-compose.yml file:

web:
  image: node
  command: npm start
  ports:
    - "8000:4242"
  links:
    - db
  working_dir: /src
  environment:
    SEQ_DB: mydatabase
    SEQ_USER: username
    SEQ_PW: pgpassword
    PORT: 4242
    DATABASE_URL: postgres://username:pgpassword@127.0.0.1:5432/mydatabase
  volumes:
    - ./:/src
db:
  image: postgres
  ports:
    - "5432:5432"
  environment:
    POSTGRES_USER: username
    POSTGRES_PASSWORD: pgpassword

Can anyone help me?

(a Docker lover)


Your DATABASE_URL refers to 127.0.0.1, which is the loopback adapter. This means "connect to myself".

When running both applications (without using Docker) on the same host, they are both addressable on the same adapter (also known as localhost).

When running both applications in containers they are not both on localhost as before. Instead you need to point the web container to the db container's IP address on the docker0 adapter - which docker-compose sets for you.

Change:

127.0.0.1 to the linked service's name (e.g. db)

Example:

DATABASE_URL: postgres://username:pgpassword@127.0.0.1:5432/mydatabase

to

DATABASE_URL: postgres://username:pgpassword@db:5432/mydatabase

This works thanks to Docker links: the web container has a file (/etc/hosts) with a db entry pointing to the IP that the db container is on. This is the first place a system (in this case, the container) will look when trying to resolve hostnames.
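On the Node side nothing else needs to change once the URL uses the service name. A minimal sketch (assuming Sequelize v6 and that the app reads DATABASE_URL from the environment, as in the compose file above):

// Minimal sketch, assuming Sequelize v6: the "db" hostname in the URL is
// resolved through the /etc/hosts entry that docker-compose creates.
const { Sequelize } = require('sequelize');

// e.g. postgres://username:pgpassword@db:5432/mydatabase
const sequelize = new Sequelize(process.env.DATABASE_URL);

async function checkConnection() {
  try {
    await sequelize.authenticate(); // runs a trivial query to verify connectivity
    console.log('Connected to Postgres via the "db" service name');
  } catch (err) {
    console.error('Unable to connect:', err.message);
  }
}

checkConnection();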

If you pass the database variables separately instead of a single URL, you can set the database host the same way:

DB_HOST=<POSTGRES_SERVICE_NAME> # in your case "db", the service name from the docker-compose file
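For example, with Sequelize you could build the connection from the question's SEQ_* variables plus such a DB_HOST variable (DB_HOST is a hypothetical name here, not something the question's compose file already sets):

// Sketch only: SEQ_DB / SEQ_USER / SEQ_PW come from the question's compose
// file; DB_HOST is a hypothetical extra variable set to the service name.
const { Sequelize } = require('sequelize');

const sequelize = new Sequelize(
  process.env.SEQ_DB,
  process.env.SEQ_USER,
  process.env.SEQ_PW,
  {
    host: process.env.DB_HOST || 'db', // the docker-compose service name
    port: 5432,
    dialect: 'postgres',
  }
);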

For future readers: if you're using Docker Desktop for Mac, use host.docker.internal instead of localhost or 127.0.0.1, as suggested in the docs. I ran into the same connection refused... problem: the backend api service couldn't connect to Postgres using localhost/127.0.0.1. Below are my docker-compose.yml and environment variables for reference:

version: "2"


services:
api:
container_name: "be"
image: <image_name>:latest
ports:
- "8000:8000"
environment:
DB_HOST: host.docker.internal
DB_USER: <your_user>
DB_PASS: <your_pass>
networks:
- mynw


db:
container_name: "psql"
image: postgres
ports:
- "5432:5432"
environment:
POSTGRES_DB: <your_postgres_db_name>
POSTGRES_USER: <your_postgres_user>
POSTGRES_PASS: <your_postgres_pass>
volumes:
- ~/dbdata:/var/lib/postgresql/data
networks:
- mynw
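The application code then just reads those variables. A minimal sketch with node-postgres, where the DB_* names match the environment block above and DB_NAME is a hypothetical extra variable for the database name:

// Sketch: on Docker Desktop for Mac, host.docker.internal resolves to the
// host machine, so the published port 5432 is reachable from the container.
const { Pool } = require('pg');

const pool = new Pool({
  host: process.env.DB_HOST,     // host.docker.internal
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME, // hypothetical extra variable
  port: 5432,
});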

I had two containers, one called postgresdb and another called node.

I changed my node queries.js from:

const { Pool } = require('pg'); // node-postgres connection pool

const pool = new Pool({
  user: 'postgres',
  host: 'localhost',
  database: 'users',
  password: 'password',
  port: 5432,
})

To

const pool = new Pool({
  user: 'postgres',
  host: 'postgresdb',
  database: 'users',
  password: 'password',
  port: 5432,
})

All I had to do was change the host to my container name ("postgresdb"), and that fixed it for me. I'm sure this can be done better, but I only picked up docker-compose and Node.js in the last two days.

If none of the other solutions worked for you, consider manually wrapping Pool.connect() with a retry on ECONNREFUSED:

const { Pool } = require('pg');

const pgPool = new Pool(pgConfig); // pgConfig holds your usual connection settings
const pgPoolWrapper = {
    async connect() {
        for (let nRetry = 1; ; nRetry++) {
            try {
                const client = await pgPool.connect();
                if (nRetry > 1) {
                    console.info('Now successfully connected to Postgres');
                }
                return client;
            } catch (e) {
                if (e.toString().includes('ECONNREFUSED') && nRetry < 5) {
                    console.info('ECONNREFUSED connecting to Postgres, ' +
                        'maybe container is not ready yet, will retry ' + nRetry);
                    // Wait 1 second before the next attempt
                    await new Promise(resolve => setTimeout(resolve, 1000));
                } else {
                    throw e;
                }
            }
        }
    }
};

(See this issue in node-postgres for tracking.)
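Call sites then go through the wrapper instead of the pool directly. A usage sketch (the users table is just an example):

// Connect through the wrapper so startup races are retried, and always
// release the client back to the pool when done.
async function listUsers() {
  const client = await pgPoolWrapper.connect();
  try {
    const { rows } = await client.query('SELECT id, name FROM users');
    return rows;
  } finally {
    client.release();
  }
}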

I am here with a tiny modification on how to handle this.

As Andy says in his response:

  • "you need to point the web container to the db container's"

And taking into consideration the official documentation about docker-compose links:

  • "Links are not required to enable services to communicate - by default, any service can reach any other service at that service’s name."

Because of that, you can keep your docker-compose.yml this way:

docker-compose.yml

version: "3"
services:
web:
image: node
command: npm start
ports:
- "8000:4242"
# links:
#   - db
working_dir: /src
environment:
SEQ_DB: mydatabase
SEQ_USER: username
SEQ_PW: pgpassword
PORT: 4242
# DATABASE_URL: postgres://username:pgpassword@127.0.0.1:5432/mydatabase
DATABASE_URL: "postgres://username:pgpassword@db:5432/mydatabase"
volumes:
- ./:/src
db:
image: postgres
ports:
- "5432:5432"
environment:
POSTGRES_USER: username
POSTGRES_PASSWORD: pgpassword

That said, keeping the explicit links is a kinda cool way to be verbose while coding, so your approach is nice too.

As mentioned in the docker-compose networking documentation:

Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.

It is important to note the distinction between HOST_PORT and CONTAINER_PORT. In the documentation's example, db publishes ports "8001:5432", so the HOST_PORT is 8001 and the container port is 5432 (the Postgres default). Networked service-to-service communication uses the CONTAINER_PORT. When HOST_PORT is defined, the service is accessible outside the swarm as well.

Within the web container, your connection string to db would look like postgres://db:5432, and from the host machine, the connection string would look like postgres://{DOCKER_IP}:8001.
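Applied to the compose file in this question, which publishes "5432:5432", the two vantage points would look roughly like this with node-postgres:

const { Pool } = require('pg');

// From another container on the compose network: use the service name
// and the CONTAINER_PORT.
const fromContainer = new Pool({ host: 'db', port: 5432 });

// From the host machine: use localhost (or the docker-machine IP) and
// the published HOST_PORT, which here happens to also be 5432.
const fromHost = new Pool({ host: '127.0.0.1', port: 5432 });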

So DATABASE_URL should be postgres://username:pgpassword@db:5432/mydatabase