Solr Index Replication

Solr is a standalone enterprise search application server that exposes a Web-service-like API. Users can submit XML files in a prescribed format to the search server over HTTP to build an index, and can issue search requests via HTTP GET and receive results in XML format.
Solr is a high-performance full-text search server developed in Java 5 and built on Lucene. It extends Lucene with a richer query language, is configurable and extensible, optimizes query performance, and provides a complete administration interface, making it an excellent full-text search engine.

Source: https://cwiki.apache.org/confluence/display/solr/Index+Replication

The Lucene index format has changed with Solr 4. As a result, once you upgrade, previous versions of Solr will no longer be able to read your indices. In a master/slave configuration, all searchers/slaves should be upgraded before the master. If the master is updated first, the older searchers will not be able to read the new index format.

Index Replication distributes complete copies of a master index to one or more slave servers. The master server continues to manage updates to the index. All querying is handled by the slaves. This division of labor enables Solr to scale to provide adequate responsiveness to queries against large search volumes.

The figure below shows a Solr configuration using index replication. The master server's index is replicated on the slaves.


A Solr index can be replicated across multiple slave servers, which then process requests.

Index Replication in Solr

Solr includes a Java implementation of index replication that works over HTTP.

For information on ssh/rsync-based replication, see the section Index Replication using ssh and rsync below.

The Java-based implementation of index replication offers these benefits:

  • Replication without requiring external scripts
  • The configuration affecting replication is controlled by a single file, solrconfig.xml
  • Supports the replication of configuration files as well as index files
  • Works across platforms with the same configuration
  • No reliance on OS-dependent hard links
  • Tightly integrated with Solr; an admin page offers fine-grained control of each aspect of replication
  • The Java-based replication feature is implemented as a RequestHandler. Configuring replication is therefore similar to any normal RequestHandler.

Replication Terminology

The following terms are key to understanding Solr replication:

  • Collection: A Lucene collection is a directory of files. These files make up the indexed and returnable data of a Solr search repository.
  • Distribution: The copying of a collection from the master server to all slaves. The distribution process takes advantage of Lucene's index file structure.
  • Inserts and Deletes: As inserts and deletes occur in the collection, the directory remains unchanged. Documents are always inserted into newly created files. Documents that are deleted are not removed from the files; they are flagged in the file as deletable, and are not removed from the files until the collection is optimized.
  • Master and Slave: The Solr distribution model uses the master/slave model. The master is the service which receives all updates initially and keeps everything organized. Solr uses a single update master server coupled with multiple query slave servers. All changes (such as inserts, updates, and deletes) are made against the single master server. Changes made on the master are distributed to all the slave servers, which service all query requests from the clients.
  • Update: An update is a single change request against a single Solr instance. It may be a request to delete a document, add a new document, change a document, delete all documents matching a query, and so on. Updates are handled synchronously within an individual Solr instance.
  • Optimization: A process that compacts the index and merges segments in order to improve query performance. New secondary segments are created to contain documents inserted into the collection after it has been optimized. A Lucene collection must be optimized periodically to maintain satisfactory query performance. Optimization is run on the master server only. An optimized index will give you a performance gain at query time of at least 10%. This gain may be larger on an index that has become fragmented over a period of time with many updates and no optimizations. Optimization takes much longer than distributing an optimized collection to all slaves.
  • Segments: The number of files in a collection.
  • mergeFactor: A parameter that controls the number of files (segments) in a collection. For example, when mergeFactor is set to 3, Solr will fill one segment with documents until the limit maxBufferedDocs is met, then start a new segment. When the number of segments specified by mergeFactor is reached (in this example, 3), Solr will merge all the segments into a single index file, then begin writing new documents to a new segment. (A toy simulation of this behavior follows this list.)
  • Snapshot: A directory containing hard links to the data files. Snapshots are distributed from the master server when the slaves pull them, "smartcopying" the snapshot directory that contains the hard links to the most recent collection data files.
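
To make the mergeFactor description above concrete, here is a toy simulation of the segment-counting rule it describes. This is only an illustration of that rule, not Lucene's actual merge policy, and the parameter values are arbitrary.

# Toy model of the mergeFactor rule described above (not Lucene's real merge policy).
def add_documents(n_docs, merge_factor=3, max_buffered_docs=4):
    segments = []   # sizes (in documents) of the segments already written
    buffered = 0    # documents in the segment currently being filled
    for _ in range(n_docs):
        buffered += 1
        if buffered == max_buffered_docs:        # segment is full: flush it
            segments.append(buffered)
            buffered = 0
            if len(segments) == merge_factor:    # mergeFactor reached: merge into one
                segments = [sum(segments)]
    return segments, buffered

# With mergeFactor=3 and maxBufferedDocs=4, after 24 documents the toy model
# holds segments of 20 and 4 documents and an empty in-memory buffer.
print(add_documents(24))   # ([20, 4], 0)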

Configuring the Replication RequestHandler on a Master Server

Before running a replication, you should set the following parameters on initialization of the handler:

  • replicateAfter: String specifying the action after which replication should occur. Valid values are commit, optimize, or startup. There can be multiple values for this parameter. If you use "startup", you also need a "commit" and/or "optimize" entry if you want to trigger replication on future commits or optimizes.
  • backupAfter: String specifying the action after which a backup should occur. Valid values are commit, optimize, or startup. There can be multiple values for this parameter. It is not required for replication; it just makes a backup.
  • maxNumberOfBackups: Integer specifying how many backups to keep. This can be used to delete all but the most recent N backups.
  • confFiles: The configuration files to replicate, separated by commas.
  • commitReserveDuration: If your commits are very frequent and your network is slow, you can tweak this parameter to increase the amount of time allowed to download 5 MB from the master to a slave. The default is 10 seconds.

The example below shows how to configure the Replication RequestHandler on a master server.

<requestHandler name="/replication" class="solr.ReplicationHandler" >
  <lst name="master">
    <str name="replicateAfter">optimize</str>
    <str name="backupAfter">optimize</str>
    <str name="confFiles">schema.xml,stopwords.txt,elevate.xml</str>
    <str name="commitReserveDuration">00:00:10</str>
  </lst>
  <int name="maxNumberOfBackups">2</int>
</requestHandler>

Replicating solrconfig.xml

In the configuration file on the master server, include a line like the following:

<str name="confFiles">solrconfig_slave.xml:solrconfig.xml,x.xml,y.xml</str>

This ensures that the local configuration solrconfig_slave.xml will be saved as solrconfig.xml on the slave. All other files will be saved with their original names.

On the master server, the file name of the slave configuration file can be anything, as long as the name is correctly identified in the confFiles string; then it will be saved as whatever file name appears after the colon ':'.

Configuring the Replication RequestHandler on a Slave Server

The code below shows how to configure a ReplicationHandler on a slave.

<requestHandler name="/replication" class="solr.ReplicationHandler" >
  <lst name="slave">
    <!-- Fully qualified URL of the master's replication handler. It can also be passed
         as a request parameter for the fetchindex command. -->
    <str name="masterUrl">http://remote_host:port/solr/corename/replication</str>
    <!-- Interval at which the slave should poll the master. Format is HH:mm:ss.
         If this is absent, the slave does not poll automatically, but a fetchindex
         can still be triggered from the admin page or the HTTP API. -->
    <str name="pollInterval">00:00:20</str>
    <!-- THE FOLLOWING PARAMETERS ARE USUALLY NOT REQUIRED -->
    <!-- To use compression while transferring the index files. The possible values are
         internal|external. If the value is 'external', make sure that your master Solr
         has the settings to honor the accept-encoding header.
         See http://wiki.apache.org/solr/SolrHttpCompression for details.
         If it is 'internal', everything is taken care of automatically.
         USE THIS ONLY IF YOUR BANDWIDTH IS LOW. THIS CAN ACTUALLY SLOW DOWN REPLICATION IN A LAN. -->
    <str name="compression">internal</str>
    <!-- The following values are used when the slave connects to the master to download
         the index files. The defaults are implicitly 5000 ms and 10000 ms respectively.
         You DO NOT need to specify these unless the bandwidth is extremely low or the
         latency is extremely high. -->
    <str name="httpConnTimeout">5000</str>
    <str name="httpReadTimeout">10000</str>
    <!-- If HTTP Basic authentication is enabled on the master, the slave can be
         configured with the following -->
    <str name="httpBasicAuthUser">username</str>
    <str name="httpBasicAuthPassword">password</str>
  </lst>
</requestHandler>

If you are not using cores, simply omit the corename part of the masterUrl above. To ensure that the URL is correct, hit it with a browser; you should get a status OK response.
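
Beyond checking the URL in a browser, the handler can also be queried from a script. The sketch below issues the indexversion command and checks for an OK response; the host, port, and core name are placeholders, and the exact JSON keys may vary between Solr versions.

# Minimal sketch for verifying that a replication handler URL is reachable.
# The host, port, and core name are placeholders, not values from this page.
import json
import urllib.request

url = "http://localhost:8983/solr/corename/replication?command=indexversion&wt=json"

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# A status of 0 in the response header is Solr's "OK"; the payload should also
# carry the latest replicatable index version and generation.
assert data["responseHeader"]["status"] == 0
print("index version:", data.get("indexversion"), "generation:", data.get("generation"))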

Setting Up a Repeater with the ReplicationHandler

A master may be able to serve only so many slaves without affecting performance. Some organizations have deployed slave servers across multiple data centers. If each slave downloads the index from a remote data center, the resulting download may consume too much network bandwidth. To avoid performance degradation in cases like this, you can configure one or more slaves as repeaters. A repeater is simply a node that acts as both a master and a slave.

  • To configure a server as a repeater, the definition of the Replication requestHandler in the solrconfig.xml file must include the file lists used for both masters and slaves.
  • Be sure to set the replicateAfter parameter to commit, even if replicateAfter is set to optimize on the main master. This is because on a repeater (or any slave), a commit is called only after the index is downloaded. The optimize command is never called on slaves.
  • Optionally, one can configure the repeater to fetch compressed files from the master through the compression parameter to reduce the index download time.

Here is an example of a ReplicationHandler configuration for a repeater:

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt,synonyms.txt</str>
  </lst>
  <lst name="slave">
    <str name="masterUrl">http://master.solr.company.com:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>

Commit and Optimize Operations

When a commit or optimize operation is performed on the master, the RequestHandler reads the list of file names which are associated with each commit point. This relies on the replicateAfter parameter in the configuration to decide which types of events should trigger replication.

The settings on the master are:

  • commit: Triggers replication whenever a commit is performed on the master index.
  • optimize: Triggers replication whenever the master index is optimized.
  • startup: Triggers replication whenever the master index starts up.

The replicateAfter parameter can accept multiple arguments. For example:

<str name="replicateAfter">startup</str>
<str name="replicateAfter">commit</str>
<str name="replicateAfter">optimize</str>

Slave Replication

The master is totally unaware of the slaves. The slave continually polls the master (at the frequency set by the pollInterval parameter) to check the current index version of the master. If the slave finds that the master has a newer version of the index, it initiates the replication process (a minimal sketch of this poll-and-fetch cycle follows the list below). The steps are as follows:

  • The slave issues a filelist command to get the list of the files. This command returns the names of the files as well as some metadata (for example, size, a last-modified timestamp, and an alias if any).
  • The slave checks its own index to see whether it already has any of those files locally, then runs the filecontent command to download the missing files. This uses a custom format (akin to HTTP chunked encoding) to download the full content or a part of each file. If the connection breaks in between, the download resumes from the point it failed. At any point, the slave tries 5 times before giving up the replication altogether.
  • The files are downloaded into a temp directory, so that if either the slave or the master crashes during the download process, no files will be corrupted. Instead, the current replication will simply abort.
  • After the download completes, all the new files are moved to the live index directory and each file's timestamp is made the same as that of its counterpart on the master.
  • A commit command is issued on the slave by the slave's ReplicationHandler and the new index is loaded.
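
The poll-and-fetch cycle above can be driven entirely through the HTTP API documented later on this page. The following is only a sketch: the hosts, port, and core name are placeholders, and error handling and scheduling are omitted.

# Sketch of a slave-style poll-and-fetch cycle using the indexversion and
# fetchindex commands. Hosts, port, and core name are placeholders.
import json
import urllib.request

MASTER = "http://master_host:8983/solr/corename/replication"
SLAVE = "http://slave_host:8983/solr/corename/replication"

def replication_cmd(base_url, command):
    # Issue a ReplicationHandler command and return the parsed JSON response.
    with urllib.request.urlopen(f"{base_url}?command={command}&wt=json") as resp:
        return json.load(resp)

master_version = replication_cmd(MASTER, "indexversion")["indexversion"]
slave_version = replication_cmd(SLAVE, "indexversion")["indexversion"]

# If the master has a newer replicatable index, ask the slave to pull it.
if master_version > slave_version:
    replication_cmd(SLAVE, "fetchindex")
    print("fetchindex triggered on the slave")
else:
    print("slave is already up to date")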

Replicating Configuration Files

To replicate configuration files, list them using the confFiles parameter. Only files found in the conf directory of the master's Solr instance will be replicated.

Solr replicates configuration files only when the index itself is replicated. That means even if a configuration file is changed on the master, that file will be replicated only after there is a new commit/optimize on the master's index.

Unlike the index files, where the timestamp is good enough to figure out if they are identical, configuration files are compared against their checksum. The schema.xml files (on master and slave) are judged to be identical if their checksums are identical.
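
The checksum comparison can be pictured with a small sketch. The file paths and the choice of MD5 below are illustrative assumptions only, not Solr's internal implementation.

# Illustrative only: decide whether two copies of a configuration file differ by
# comparing checksums. Paths and the MD5 choice are assumptions, not Solr internals.
import hashlib

def checksum(path):
    # Hash the whole file; identical digests mean identical content.
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

master_copy = "master/conf/schema.xml"  # hypothetical local copy
slave_copy = "slave/conf/schema.xml"    # hypothetical local copy

if checksum(master_copy) != checksum(slave_copy):
    print("schema.xml differs; it would be replicated with the next commit/optimize")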

As a precaution when replicating configuration files, Solr copies configuration files to a temporary directory before moving them into their ultimate location in the conf directory. The old configuration files are then renamed and kept in the same conf/ directory. The ReplicationHandler does not automatically clean up these old files.

If a replication involves downloading at least one configuration file, the ReplicationHandler issues a core-reload command instead of a commit command.

Resolving Corruption Issues on Slave Servers

If documents are added to the slave, then the slave is no longer in sync with its master. However, the slave will not undertake any action to put itself in sync until the master has new index data. When a commit operation takes place on the master, the index version of the master becomes different from that of the slave. The slave then fetches the list of files and finds that some of the files present on the master are also present in the local index but with different sizes and timestamps. This means that the master and slave have incompatible indexes. To correct this problem, the slave copies all the index files from the master to a new index directory and asks the core to load the fresh index from the new directory.

HTTP API Commands for the ReplicationHandler

You can use the HTTP commands below to control the ReplicationHandler's operations.

  • http://_master_host_:_port_/solr/replication?command=enablereplication: Enables replication on the master for all its slaves.
  • http://_master_host_:_port_/solr/replication?command=disablereplication: Disables replication on the master for all its slaves.
  • http://_host_:_port_/solr/replication?command=indexversion: Returns the version of the latest replicatable index on the specified master or slave.
  • http://_slave_host_:_port_/solr/replication?command=fetchindex: Forces the specified slave to fetch a copy of the index from its master. If you like, you can pass an extra attribute such as masterUrl or compression (or any other parameter which is specified in the <lst name="slave"> tag) to do a one-time replication from a master; this obviates the need for hard-coding the master in the slave (a scripted example follows this list).
  • http://_slave_host_:_port_/solr/replication?command=abortfetch: Aborts copying an index from a master to the specified slave.
  • http://_slave_host_:_port_/solr/replication?command=enablepoll: Enables the specified slave to poll for changes on the master.
  • http://_slave_host_:_port_/solr/replication?command=disablepoll: Disables the specified slave from polling for changes on the master.
  • http://_slave_host_:_port_/solr/replication?command=details: Retrieves configuration details and current status.
  • http://host:port/solr/replication?command=filelist&indexversion=<index-version-number>: Retrieves a list of Lucene files present in the specified host's index. You can discover the version number of the index by running the indexversion command.
  • http://_master_host_:_port_/solr/replication?command=backup: Creates a backup on the master if there is committed index data in the server; otherwise, does nothing. This command is useful for making periodic backups. The numberToKeep request parameter can be used with the backup command unless the maxNumberOfBackups initialization parameter has been specified on the handler, in which case maxNumberOfBackups is always used and attempts to use the numberToKeep request parameter will cause an error.
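
The one-time replication mentioned under the fetchindex command can be scripted as in the sketch below. The slave address, core name, and alternate master URL are placeholders.

# Sketch of a one-time fetchindex with an explicit masterUrl, which overrides the
# master configured in solrconfig.xml for this single request. Hosts, port, and
# core name are placeholders.
import urllib.parse
import urllib.request

slave = "http://slave_host:8983/solr/corename/replication"
params = urllib.parse.urlencode({
    "command": "fetchindex",
    "masterUrl": "http://other_master:8983/solr/corename/replication",
})

with urllib.request.urlopen(f"{slave}?{params}") as resp:
    print(resp.status, resp.read(200))  # expect HTTP 200 and an OK status in the body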

Index Replication using ssh and rsync

Solr supports ssh/rsync-based replication. This mechanism only works on systems that support removing open hard links.

Solr distribution is similar in concept to database replication. All collection changes come to one master Solr server. All production queries are done against query slaves. Query slaves receive all their collection changes indirectly — as new versions of a collection which they pull from the master. These collection downloads are polled for on a cron'd basis.

A collection is a directory of many files. Collections are distributed to the slaves as snapshots of these files. Each snapshot is made up of hard links to the files so copying of the actual files is not necessary when snapshots are created. Lucene only significantly rewrites files following an optimization command. Generally, once a file is written, it will change very little, if at all. This makes the underlying transport of rsync very useful. Files that have already been transferred and have not changed do not need to be re-transferred with the new edition of a collection.

The Snapshot and Distribution Process

Here are the steps that Solr follows when replicating an index:

  1. The snapshooter command takes snapshots of the collection on the master. It runs when invoked by Solr after it has done a commit or an optimize.
  2. The snappuller command runs on the query slaves to pull the newest snapshot from the master. This is done via rsync in daemon mode running on the master for better performance and lower CPU utilization over rsync using a remote shell program as the transport.
  3. The snapinstaller runs on the slave after a snapshot has been pulled from the master. This signals the local Solr server to open a new index reader, then auto-warming of the cache(s) begins (in the new reader), while other requests continue to be served by the original index reader. Once auto-warming is complete, Solr retires the old reader and directs all new queries to the newly cache-warmed reader.
  4. All distribution activity is logged and written back to the master to be viewable on the distribution page of its GUI.
  5. Old versions of the index are removed from the master and slave servers by a cron'd snapcleaner.

If you are building an index from scratch, distribution is the final step of the process.

Manual copying of index files is not recommended; however, running distribution commands manually (that is, not relying on crond to run them) is perfectly fine.

Snapshot Directories

Snapshots are stored in directories whose names follow this format: snapshot.yyyymmddHHMMSS

All the files in the index directory are hard links to the latest snapshot. This design offers these advantages:

  • The Solr implementation can keep multiple snapshots on each host without needing to keep multiple copies of index files that have not changed.
  • File copying from master to slave is very fast.
  • Taking a snapshot is very fast as well.

Solr Distribution Scripts

For the Solr distribution scripts, the name of the index directory is defined either by the environment variable data_dir in the configuration file solr/conf/scripts.conf or the command line argument -d. It should match the value used by the Solr server which is defined in solr/conf/solrconfig.xml.

All Solr collection distribution scripts are bundled in a Solr release and reside in the directory solr/src/scripts. It's recommended that you install the scripts in a solr/bin/ directory.

Collection distribution scripts create and prepare for distribution a snapshot of a search collection after each commit and optimize request, provided the postCommit and postOptimize event listeners are configured in solrconfig.xml to execute snapshooter.

The snapshooter script creates a directory snapshot.<ts>, where <ts> is a timestamp in the format, yyyymmddHHMMSS. It contains hard links to the data files.

Snapshots are distributed from the master server when the slaves pull them, "smartcopying" the snapshot directory that contains the hard links to the most recent collection data files.
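
The hard-link structure of a snapshot directory can be sketched as follows. This is not the actual snapshooter script, and the data directory path is a made-up example.

# Illustrative sketch of a snapshooter-style snapshot: a snapshot.<timestamp>
# directory whose entries are hard links to the current index files, so no file
# data is copied. The index path is a made-up example.
import os
import time

index_dir = "solr/data/index"
timestamp = time.strftime("%Y%m%d%H%M%S")  # yyyymmddHHMMSS, as described above
snap_dir = os.path.join("solr/data", "snapshot." + timestamp)

os.makedirs(snap_dir)
for name in os.listdir(index_dir):
    # A hard link adds a second name for the same data blocks on disk.
    os.link(os.path.join(index_dir, name), os.path.join(snap_dir, name))

print("created", snap_dir, "with", len(os.listdir(snap_dir)), "hard links")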

The collection distribution scripts are:

  • snapshooter: Creates a snapshot of a collection. snapshooter is normally configured to run on the master Solr server when a commit or optimize happens. It can also be run manually, but one must make sure that the index is in a consistent state, which can only be done by pausing indexing and issuing a commit.
  • snappuller: A shell script that runs as a cron job on a slave Solr server. The script looks for new snapshots on the master Solr server and pulls them.
  • snappuller-enable: Creates the file solr/logs/snappuller-enabled, whose presence enables snappuller.
  • snapinstaller: Installs the latest snapshot (determined by the timestamp) into place, using hard links (similar to the process of taking a snapshot). solr/logs/snapshot.current is then written and scp'd (securely copied) back to the master Solr server. snapinstaller then triggers the Solr server to open a new Searcher.
  • snapcleaner: Runs as a cron job to remove snapshots more than a configurable number of days old, or all snapshots except for the most recent n snapshots. It can also be run manually.
  • rsyncd-start: Starts the rsyncd daemon on the master Solr server, which handles collection distribution requests from the slaves.
  • rsyncd daemon: Efficiently synchronizes a collection between master and slaves by copying only the files that actually changed. In addition, rsync can optionally compress data before transmitting it.
  • rsyncd-stop: Stops the rsyncd daemon on the master Solr server. The stop script then makes sure that the daemon has in fact exited by trying to connect to it for up to 300 seconds. The stop script exits with error code 2 if it fails to stop the rsyncd daemon.
  • rsyncd-enable: Creates the file solr/logs/rsyncd-enabled, whose presence allows the rsyncd daemon to run, allowing replication to occur.
  • rsyncd-disable: Removes the file solr/logs/rsyncd-enabled, whose absence prevents the rsyncd daemon from running, preventing replication.

For more information about usage arguments and syntax see the SolrCollectionDistributionScripts page on the Solr Wiki.

Solr Distribution-related Cron Jobs

The distribution process is automated through the use of cron jobs. The cron jobs should run under the user ID that the Solr server is running under.

  • snapcleaner: The snapcleaner job should be run out of cron on a regular basis to clean up old snapshots. This should be done on both the master and slave Solr servers. For example, the following cron job runs every day at midnight and cleans up snapshots 8 days and older:

    0 0 * * * <solr.solr.home>/solr/bin/snapcleaner -D 7

    Additional cleanup can always be performed on demand by running snapcleaner manually.

  • snappuller, snapinstaller: On the slave Solr servers, snappuller should be run out of cron regularly to get the latest index from the master Solr server. It is a good idea to also run snapinstaller with snappuller back-to-back in the same crontab entry to install the latest index once it has been copied over to the slave Solr server.

    For example, the following cron job runs every 5 minutes to keep the slave Solr server in sync with the master Solr server:

    0,5,10,15,20,25,30,35,40,45,50,55 * * * * <solr.solr.home>/solr/bin/snappuller;<solr.solr.home>/solr/bin/snapinstaller

    Modern cron allows this to be shortened to */5 * * * *....

Performance Tuning for Script-based Replication

Because fetching a master index uses the rsync utility, which transfers only the segments that have changed, replication is normally very fast. However, if the master server has been optimized, then rsync may take a long time, because many segments will have been changed in the process of optimization.

  • If replicating to multiple slaves consumes too much network bandwidth, consider the use of a repeater.
  • Make sure that slaves do not pull from the master so frequently that a previous replication is still running when a new one is started. In general, it's best to allow at least a minute for the replication process to complete. But in configurations with low network bandwidth or a very large index, even more time may be required.

Commit and Optimization

On a very large index, adding even a few documents and then running an optimize operation causes the complete index to be rewritten. This consumes a lot of disk I/O and impacts query performance. Optimizing a very large index may even involve copying the index twice and calling optimize at the beginning and at the end. If some documents have been deleted, the first optimize call will rewrite the index even before the second index is merged.

Optimization is an I/O intensive process, as the entire index is read and re-written in optimized form. Anecdotal data shows that optimizations on modest server hardware can take around 5 minutes per GB, although this obviously varies considerably with index fragmentation and hardware bottlenecks. We do not know what happens to query performance on a collection that has not been optimized for a long time. We do know that it will get worse as the collection becomes more fragmented, but how much worse is very dependent on the manner of updates and commits to the collection. The setting of the mergeFactor attribute affects performance as well. Dividing a large index with millions of documents into even as few as five segments may degrade search performance by as much as 15-20%.

While optimizing has many benefits, a rapidly changing index will not retain those benefits for long, and since optimization is an intensive process, it may be better to consider other options, such as lowering the merge factor (discussed in this Guide in the section on Configuring the Lucene Index Writers).

Distribution and Optimization

The time required to optimize a master index can vary dramatically. A small index may be optimized in minutes. A very large index may take hours. The variables include the size of the index and the speed of the hardware.

Distributing a newly optimized collection may take only a few minutes or up to an hour or more, again depending on the size of the index and the performance capabilities of network connections and disks. During optimization the machine is under load and does not process queries very well. Given a schedule of updates being driven a few times an hour to the slaves, we cannot run an optimize with every committed snapshot.

Copying an optimized collection means that the entire collection will need to be transferred during the next snappull. This is a large expense, but not nearly as huge as running the optimize everywhere. Consider this example: on a three-slave one-master configuration, distributing a newly-optimized collection takes approximately 80 seconds total. Rolling the change across a tier would require approximately ten minutes per machine (or machine group). If this optimize were rolled across the query tier, and if each collection being optimized were disabled and not receiving queries, a rollout would take at least twenty minutes and potentially as long as an hour and a half. Additionally, the files would need to be synchronized so that the following rsync snappull would not think that the independently optimized files were different in any way. This would also leave the door open to independent corruption of collections instead of each being a perfect copy of the master.

Optimizing on the master allows for a straight-forward optimization operation. No query slaves need to be taken out of service. The optimized collection can be distributed in the background as queries are being normally serviced. The optimization can occur at any time convenient to the application providing collection updates.
