Logstash reporting errors when sending data to Elasticsearch

Logstash is configured to send bulk data to Elasticsearch, but the _bulk requests fail and Logstash logs an error similar to the following:

[2021-05-24T15:50:22,049][ERROR][logstash.outputs.elasticsearch][esarchive][72e046782ee7ecdaeef95d897e7dc9fd30f75e2dd3e0273fd878fabac23b9e79] Encountered a retryable error. Will Retry with exponential backoff {:code=>502, :url=>"https://iaf-system-elasticsearch-es.optest:9200/_bulk", :content_length=>362194, :body=>"<html>\r\n<head><title>502 Bad Gateway</title></head>\r\n<body>\r\n<center><h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"}

The Elasticsearch tls-proxy container also logs a corresponding error:

iaf-system-elasticsearch-es-master-data-0 tls-proxy 2021/05/24 15:36:30 [error] 11#0: *1483 upstream sent too big header while reading response header from upstream, client: 10.254.17.31, server: tls-proxy-9200, request: "POST /_bulk HTTP/1.1", upstream: "http://127.0.0.1:9201/_bulk", host: "iaf-system-elasticsearch-es.optest:9200"

Cause

The /_bulk request causes Elasticsearch to return header data that is larger than the nginx-based tls-proxy container can buffer (the "upstream sent too big header" message above), so the proxy cannot relay the response and fails the request with 502 Bad Gateway.
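For context, the limit involved is standard nginx proxy buffering. The sketch below shows the directives that control how large an upstream response header nginx accepts; the server block, buffer values, and file layout are illustrative assumptions, and the tls-proxy container's actual configuration might not be user-editable:

# Hypothetical nginx proxy configuration illustrating the buffering limits
# behind the "upstream sent too big header" error. The directive names are
# standard nginx; the values and server block shown here are assumptions.
server {
    listen 9200 ssl;
    server_name tls-proxy-9200;

    location / {
        proxy_pass http://127.0.0.1:9201;

        # Buffer that holds the first part of the upstream response,
        # including the response headers; a response header larger than
        # this buffer triggers "upstream sent too big header".
        proxy_buffer_size 8k;

        # Number and size of buffers for the rest of the response body.
        proxy_buffers 8 8k;
    }
}

Because the tls-proxy configuration is managed by the platform, the practical fix is to shrink the batches on the Logstash side, as described in the next section.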

Resolving the problem

To resolve the issue, limit the batch size that Logstash uses for each Elasticsearch _bulk request. The pipeline.batch.size setting caps how many events a worker thread collects before flushing them to the outputs, which bounds the size of each _bulk request and its response. For example, in logstash.yml:

pipeline.batch.size: 35   # maximum number of events a worker collects before flushing a batch to the outputs
pipeline.batch.delay: 10  # milliseconds to wait for new events before flushing an undersized batch

Note: The appropriate values for these settings depend on the size of the events being sent and might need to be tuned for your workload.
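The same settings can also be scoped to a single pipeline in pipelines.yml instead of applying globally in logstash.yml. A minimal sketch, reusing the esarchive pipeline ID from the Logstash error message above; the path.config value is a hypothetical placeholder:

- pipeline.id: esarchive
  path.config: "/usr/share/logstash/pipeline/esarchive.conf"  # hypothetical path to the pipeline definition
  pipeline.batch.size: 35   # applies only to this pipeline
  pipeline.batch.delay: 10

Scoping the change this way keeps other pipelines on their default batch size while shrinking only the bulk requests sent to Elasticsearch by the affected pipeline.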