Elasticsearch, Failed to obtain node lock, is the following location writable

Elasticsearch won't start using ./bin/elasticsearch. It raises the following exception:

ElasticsearchIllegalStateException[Failed to obtain node lock, is the following location writable?: [/home/user1/elasticsearch-1.4.4/data/elasticsearch]]

I checked that location: it has 777 permissions and is owned by user1.

ls -al /home/user1/elasticsearch-1.4.4/data/elasticsearch
drwxrwxrwx  3 user1 wheel 4096 Mar  8 13:24 .
drwxrwxrwx  3 user1 wheel 4096 Mar  8 13:00 ..
drwxrwxrwx 52 user1 wheel 4096 Mar  8 13:51 nodes

What is the problem?

I am trying to run Elasticsearch 1.4.4 on Linux without root access.


In my situation the ES data directory had the wrong permissions. Setting the correct owner solved it.

# change owner
chown -R elasticsearch:elasticsearch /data/elasticsearch/


# to validate
ls /data/elasticsearch/ -la
# prints
# drwxr-xr-x 2 elasticsearch elasticsearch 4096 Apr 30 14:54 CLUSTER_NAME

In my case, this error was caused by not mounting the devices used for the configured data directories using "sudo mount".

I got this same error message, but things were mounted fine and the permissions were all correctly assigned.

Turns out that I had an 'orphaned' elasticsearch process that was not being killed by the normal stop command.

I had to manually kill the process and then restarting elasticsearch worked again.

I had an orphaned Java process related to Elasticsearch. Killing it solved the lock issue.

ps aux | grep 'java'
kill -9 <PID>

You already have ES running. To prove that, type:

curl 'localhost:9200/_cat/indices?v'

If you want to run another instance on the same box you can set node.max_local_storage_nodes in elasticsearch.yml to a value larger than 1.

The reason is that another instance is already running!
First find the PID of the running Elasticsearch:

ps aux | grep 'elastic'

Then kill it using kill -9 <PID_OF_RUNNING_ELASTIC>.
Some answers suggested removing the node.lock file, but that didn't help, since the running instance will just recreate it!

Try the following:

  1. Find what is using port 9200, e.g. lsof -i:9200. This will show you which processes use the port.
  2. Kill the PID(s), e.g. repeat kill -9 <PID> for each PID that the output of lsof showed in step 1.
  3. Restart Elasticsearch, e.g. elasticsearch.
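
The steps above can be sketched as a small script. This is a hedged sketch, not a definitive fix: it assumes lsof is installed and the node listens on the default port 9200, and the port_pids helper name is mine.

```shell
# port_pids: print the PIDs of processes listening on the given TCP port
# (prints nothing when the port is free or lsof is unavailable).
port_pids() {
  lsof -t -i tcp:"$1" 2>/dev/null || true
}

pids=$(port_pids 9200)
if [ -n "$pids" ]; then
  echo "killing stale process(es): $pids"
  kill -9 $pids
fi
echo "port 9200 free -- start elasticsearch again"
```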

I had another Elasticsearch instance running on the same machine.

Command to check (9200 - Elastic port):

netstat -nlp | grep 9200

Result:

tcp 0 0 :::9210 :::* LISTEN 27462/java

Kill the process (27462 - PID of the Elasticsearch instance):

kill -9 27462

Start Elasticsearch again and it should run now.

To add to the above answers, there are other scenarios in which you can get this error. In my case I had done an upgrade from 5.5 to 6.3 of Elasticsearch. I have been using a docker-compose setup with named volumes for the data directories. I had to do a docker volume prune to remove the stale ones. After doing that, I no longer faced the issue.
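
That cleanup looks roughly like this. A hedged sketch, assuming a docker-compose project; it is guarded so it is a no-op on machines without Docker, and note that prune deletes data, so only do this when the old volumes are genuinely stale.

```shell
# Remove stale named volumes left over from the old major version.
if command -v docker >/dev/null 2>&1; then
  docker compose down        # stop the stack; named volumes survive this
  docker volume prune -f     # on Docker 23+, add -a to also prune named volumes
  docker compose up -d       # recreate the stack with fresh volumes
  status="pruned"
else
  status="skipped: docker not available"
fi
echo "$status"
```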

After I upgraded the Elasticsearch Docker image from version 5.6.x to 6.3.y, the container would not start anymore because of the aforementioned error:

Failed to obtain node lock

In my case the root cause of the error was missing file permissions.

The data folder used by Elasticsearch was mounted from the host system into the container (declared in the docker-compose.yml):

    volumes:
      - /var/docker_folders/common/experimental-upgrade:/usr/share/elasticsearch/data

This folder could not be accessed anymore by Elasticsearch for reasons I did not understand at all. After I set very permissive file permissions on this folder and all sub-folders, the container started again.

I do not want to reproduce the command that sets those very permissive access rights on the mounted Docker folder, because it is most likely bad practice and a security issue. I just wanted to share that it might not be a second Elasticsearch process running, but simply missing access rights on the mounted folder.

Maybe someone could elaborate on the appropriate rights to set for a mounted folder in a Docker container?
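
To attempt an answer to that question: the official 6.x-era images run the process as UID 1000 with GID 0, so a commonly documented, less permissive setup is to make the host folder group-0 and group-writable rather than chmod 777. A hedged sketch, with a temp directory standing in for the real mount point (chgrp 0 needs privileges on a real host, hence the fallback):

```shell
# Stand-in for the host folder that docker-compose.yml mounts into the container.
esdatadir=$(mktemp -d)

# Grant group 0 access instead of making the folder world-writable.
chgrp 0 "$esdatadir" 2>/dev/null || true
chmod g+rwx "$esdatadir"

stat -c '%a' "$esdatadir"    # 770: owner+group rwx, no world access
```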

For me the error was a simple one: I created a new data directory /mnt/elkdata and changed the ownership to the elastic user. I then copied the files over and forgot to change the ownership again afterwards.

After doing that and restarting the elastic node it worked.

As with many others here replying, this was caused by wrong permissions on the directory (not owned by the elasticsearch user). In our case it was caused by uninstalling Elasticsearch and reinstalling it (via yum, using the official repositories).

As of this moment, the repos do not delete the nodes directory when they are uninstalled, but they do delete the elasticsearch user/group that owns it. So then when Elasticsearch is reinstalled, a new, different elasticsearch user/group is created, leaving the old nodes directory still present, but owned by the old UID/GID. This then conflicts and causes the error.

A recursive chown as mentioned by @oleksii is the solution.

chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
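
The UID mismatch described above can be checked before chowning. A sketch of the idea only: a temp dir and the current user stand in for /var/lib/elasticsearch and the reinstalled elasticsearch user, since the real check needs root. In this simulation the UIDs match; on the broken host they would differ.

```shell
# Does the data directory's owner UID match the service user's UID?
nodes_dir=$(mktemp -d)

dir_uid=$(stat -c '%u' "$nodes_dir")   # UID recorded on the directory
cur_uid=$(id -u)                       # UID the service would run as

if [ "$dir_uid" = "$cur_uid" ]; then
  echo "ownership matches"
else
  echo "stale ownership -- run: chown -R elasticsearch:elasticsearch <datadir>"
fi
```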

The error directly says it doesn't have permission to obtain the lock, so you need to grant permissions.

Check these options:

# With Docker
sudo chown 1000:1000 <directory you wish to mount>
# e.g.
sudo chown 1000:1000 /data/elasticsearch/

# With a VM
sudo chown elasticsearch:elasticsearch /data/elasticsearch/


In my case the /var/lib/elasticsearch was the dir with missing permissions (CentOS 8):

error: java.io.IOException: failed to obtain lock on /var/lib/elasticsearch/nodes/0

To fix it, use:

chown -R elasticsearch:elasticsearch /var/lib/elasticsearch

If anyone is seeing this being caused by:

Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/docker/es]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?

The solution is to set max_local_storage_nodes in your elasticsearch.yml

node.max_local_storage_nodes: 2

The docs say to set this to a number greater than one on your development machine:

By default, Elasticsearch is configured to prevent more than one node from sharing the same data path. To allow for more than one node (e.g., on your development machine), use the setting node.max_local_storage_nodes and set this to a positive integer larger than one.

I think that Elasticsearch needs to have a second node available so that a new instance can start. This happens to me whenever I try to restart Elasticsearch inside my Docker container. If I relaunch my container then Elasticsearch will start properly the first time without this setting.

If you are on windows then try this:

  1. Kill any java processes
  2. If the start batch script is interrupted, then rather than closing the terminal, press Ctrl+C to properly stop the Elasticsearch service before you exit the terminal.

Mostly this error occurs when you kill the process abruptly. When you do, the node.lock file may not be cleared. You can manually remove the node.lock file and start the process again; it should work.
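
A hedged sketch of that cleanup, with a temp directory mirroring the default data layout standing in for the real path so nothing on the host is touched:

```shell
# Recreate the data layout with a stale lock file left behind
# by an abruptly killed node.
datadir=$(mktemp -d)
mkdir -p "$datadir/nodes/0"
touch "$datadir/nodes/0/node.lock"

# Only remove the lock when no Elasticsearch JVM is still alive --
# otherwise the running instance will simply recreate it.
if ! pgrep -f org.elasticsearch >/dev/null 2>&1; then
  rm -f "$datadir"/nodes/*/node.lock
fi
```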