How to move Elasticsearch data from one server to another

How do I move Elasticsearch data from one server to another?

I have server A running Elasticsearch 1.1.1 on one local node with multiple indices. I would like to copy that data to server B, which is running Elasticsearch 1.3.4.

Procedure so far:

  1. Shut down ES on both servers, and
  2. scp all the data to the correct data directory on the new server. (The data appears to live in /var/lib/elasticsearch/ on my Debian box.)
  3. Change permissions and ownership to elasticsearch:elasticsearch
  4. Start up the new ES server

When I look at the cluster with the ES head plugin, no indices appear.

It seems the data has not been loaded. Am I missing something?


You can use the snapshot/restore feature that Elasticsearch provides. Once you have set up a filesystem-based snapshot repository, you can move it between clusters and restore it on a different cluster.
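A minimal sketch of that workflow (the repository name `my_backup`, the path, and the snapshot name are placeholders; the `location` must also be listed under `path.repo` in elasticsearch.yml on every node):

```
PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/es_backups"
  }
}

PUT _snapshot/my_backup/snapshot_1?wait_for_completion=true

POST _snapshot/my_backup/snapshot_1/_restore
```

On the destination cluster, register the same repository pointing at a copy of the snapshot files, then run the `_restore` call there.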

Use ElasticDump

1) yum install epel-release

2) yum install nodejs

3) yum install npm

4) npm install elasticdump

5) cd node_modules/elasticdump/bin

6)

./elasticdump \
  --input=http://192.168.1.1:9200/original \
  --output=http://192.168.1.2:9200/newCopy \
  --type=data

The selected answer makes it sound slightly more complicated than it is; here is what you need (install npm on your system first).

npm install -g elasticdump
elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=mapping
elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=data

You can skip the first elasticdump command for subsequent copies if the mappings remain constant.

I just completed a migration from AWS to Qbox.io with no problems.

More details at:

https://www.npmjs.com/package/elasticdump

Full help page included (as of February 2016):

elasticdump: Import and export tools for elasticsearch


Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]


--input
Source location (required)
--input-index
Source index and type
(default: all, example: index/type)
--output
Destination location (required)
--output-index
Destination index and type
(default: all, example: index/type)
--limit
How many objects to move in bulk per operation
limit is approximate for file streams
(default: 100)
--debug
Display the elasticsearch commands being used
(default: false)
--type
What are we exporting?
(default: data, options: [data, mapping])
--delete
Delete documents one-by-one from the input as they are
moved.  Will not delete the source index
(default: false)
--searchBody
Perform a partial extract based on search results
(when ES is the input,
(default: '{"query": { "match_all": {} } }'))
--sourceOnly
Output only the json contained within the document _source
Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
sourceOnly: {SOURCE}
(default: false)
--all
Load/store documents from ALL indexes
(default: false)
--bulk
Leverage elasticsearch Bulk API when writing documents
(default: false)
--ignore-errors
Will continue the read/write loop on write error
(default: false)
--scrollTime
Time the nodes will hold the requested search in order.
(default: 10m)
--maxSockets
How many simultaneous HTTP requests can we process?
(default:
5 [node <= v0.10.x] /
Infinity [node >= v0.11.x] )
--bulk-mode
The mode can be index, delete or update.
'index': Add or replace documents on the destination index.
'delete': Delete documents on destination index.
'update': Use 'doc_as_upsert' option with bulk update API to do partial update.
(default: index)
--bulk-use-output-index-name
Force use of destination index name (the actual output URL)
as destination while bulk writing to ES. Allows
leveraging Bulk API copying data inside the same
elasticsearch instance.
(default: false)
--timeout
Integer containing the number of milliseconds to wait for
a request to respond before aborting the request. Passed
directly to the request library. If used in bulk writing,
it will result in the entire batch not being written.
Mostly used when you don't care too much if you lose some
data when importing but rather have speed.
--skip
Integer containing the number of rows you wish to skip
ahead from the input transport.  When importing a large
index, things can go wrong, be it connectivity, crashes,
someone forgetting to `screen`, etc.  This allows you
to start the dump again from the last known line written
(as logged by the `offset` in the output).  Please be
advised that since no sorting is specified when the
dump is initially created, there's no real way to
guarantee that the skipped rows have already been
written/parsed.  This is more of an option for when
you want to get most data as possible in the index
without concern for losing some rows in the process,
similar to the `timeout` option.
--inputTransport
Provide a custom js file to use as the input transport
--outputTransport
Provide a custom js file to use as the output transport
--toLog
When using a custom outputTransport, should log lines
be appended to the output stream?
(default: true, except for `$`)
--help
This page


Examples:


# Copy an index from production to staging with mappings:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=mapping
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=data


# Backup index data to a file:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index_mapping.json \
--type=mapping
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index.json \
--type=data


# Backup and index to a gzip using stdout:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=$ \
| gzip > /data/my_index.json.gz


# Backup ALL indices, then use Bulk API to populate another ES cluster:
elasticdump \
--all=true \
--input=http://production-a.es.com:9200/ \
--output=/data/production.json
elasticdump \
--bulk=true \
--input=/data/production.json \
--output=http://production-b.es.com:9200/


# Backup the results of a query to a file
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=query.json \
--searchBody '{"query":{"term":{"username": "admin"}}}'


------------------------------------------------------------------------------
Learn more @ https://github.com/taskrabbit/elasticsearch-dump

If you can add the second server to the cluster, you can do it like this:

  1. Add Server B to the cluster with Server A
  2. Increment the number of replicas for the indices
  3. ES will automatically copy the indices over to Server B
  4. Shut down Server A
  5. Decrement the number of replicas for the indices

This will only work if the number of replicas equals the number of nodes.
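Incrementing and later decrementing the replica count (steps 2 and 5) can be done through the index settings API; `my_index` and the replica count are placeholders:

```
PUT my_index/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}
```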

I tried moving data from ELK 2.4.3 to ELK 5.1.1 on Ubuntu.

Here are the steps:

$ sudo apt-get update
$ sudo apt-get install -y python-software-properties python g++ make
$ sudo add-apt-repository ppa:chris-lea/node.js
$ sudo apt-get update
$ sudo apt-get install npm
$ sudo apt-get install nodejs
$ npm install colors
$ npm install nomnom
$ npm install elasticdump

In the home directory, go to:

$ cd node_modules/elasticdump/

Execute the command.

If you need basic HTTP auth, you can use it like this:

--input=http://name:password@localhost:9200/my_index

Copy an index from production:

$ ./bin/elasticdump --input="http://Source:9200/Sourceindex" --output="http://username:password@Destination:9200/Destination_index"  --type=data

In case anyone runs into the same issue: when trying to dump from Elasticsearch &lt; 2.0 to &gt; 2.0, you need to run:

elasticdump --input=http://localhost:9200/$SRC_IND --output=http://$TARGET_IP:9200/$TGT_IND --type=analyzer
elasticdump --input=http://localhost:9200/$SRC_IND --output=http://$TARGET_IP:9200/$TGT_IND --type=mapping
elasticdump --input=http://localhost:9200/$SRC_IND --output=http://$TARGET_IP:9200/$TGT_IND --type=data --transform "delete doc._source['_id']"
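The `--transform` expression is evaluated by elasticdump for each document, with `doc` bound to the hit. This Node.js sketch, using a made-up sample document, shows its effect: stripping the `_id` field that pre-2.0 indices could carry inside `_source`, which 2.0+ rejects on import.

```javascript
// Hypothetical document shape as elasticdump would see it.
const doc = {
  _index: "src_index",
  _type: "doc",
  _id: "1",
  _source: { _id: "1", user: "admin" } // pre-2.0 indices could store _id here
};

// What --transform "delete doc._source['_id']" does per document:
delete doc._source["_id"];

console.log(JSON.stringify(doc._source)); // {"user":"admin"}
```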

If you just need to transfer data from one Elasticsearch server to another, you could also use elasticsearch-document-transfer.

Steps:

  1. Open a directory in your terminal and run
    $ npm install elasticsearch-document-transfer
  2. Create a file config.js
  3. Add the connection details of both Elasticsearch servers in config.js
  4. Set appropriate values in options.js
  5. Run in the terminal
    $ node index.js

I've always had success simply copying the index directory/folder over to the new server and restarting it. You can find the index id by running GET /_cat/indices, and the folder matching that id is in data\nodes\0\indices (usually inside your Elasticsearch folder, unless you moved it).

There is also the _reindex option.

From the documentation:

Through the Elasticsearch reindex API, available in version 5.x and later, you can connect your new Elasticsearch Service deployment remotely to your old Elasticsearch cluster. This pulls the data from your old cluster and indexes it into your new one. Reindexing essentially rebuilds the index from scratch, and running it can require more resources.

POST _reindex
{
  "source": {
    "remote": {
      "host": "https://REMOTE_ELASTICSEARCH_ENDPOINT:PORT",
      "username": "USER",
      "password": "PASSWORD"
    },
    "index": "INDEX_NAME",
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "INDEX_NAME"
  }
}

We can use elasticdump or multielasticdump to take a backup and restore it, and thereby move data from one server/cluster to another server/cluster.

Please see the detailed answer I have provided.

You can take a snapshot of the complete state of your cluster (including all data indices) and restore it (using the restore API) on the new cluster or server.

If you don't want to use elasticdump as a console tool, you can use a Node.js script instead.

I guess you can also just copy the data folder.