
add influx push endpoint to mimir #10153

Open · wants to merge 50 commits into base: main. Changes shown from 40 commits.

Commits (50):
444d34f  olegs base commits from #1971 (alexgreenbank, Dec 6, 2024)
adb376a  move top level influx files (alexgreenbank, Dec 6, 2024)
3db179f  latest wip (alexgreenbank, Dec 9, 2024)
295fa8d  still WIP but better, still need to move to parserFunc() style (alexgreenbank, Dec 10, 2024)
d2897e3  it builds! (alexgreenbank, Dec 10, 2024)
c2fb679  tweaks and add span logging (alexgreenbank, Dec 11, 2024)
593b2e2  more todo (alexgreenbank, Dec 11, 2024)
31bf23d  further tweaks (alexgreenbank, Dec 12, 2024)
2487a88  some fixes to tests (alexgreenbank, Dec 12, 2024)
03e8e3c  rejigged error handling, tests passing (alexgreenbank, Dec 12, 2024)
0bd8da8  add vendored influxdb code (alexgreenbank, Dec 12, 2024)
4a76a11  lint (alexgreenbank, Dec 13, 2024)
066c009  go mod sum vendor/modules.txt (alexgreenbank, Dec 13, 2024)
bec3a26  add a metric, add tenant info, other tweaks (alexgreenbank, Dec 17, 2024)
ac51def  various rework, still WIP (alexgreenbank, Dec 17, 2024)
3a57dc6  propagate bytesRead down to caller and log and histogram (alexgreenbank, Dec 17, 2024)
92379e4  remove comment now dealt with (alexgreenbank, Dec 17, 2024)
d44c71d  add defaults in error handling (alexgreenbank, Dec 17, 2024)
591389e  Add note to docs about experimental Influx flag (alexgreenbank, Dec 17, 2024)
afbc357  Note influx endpoint as experimental too (alexgreenbank, Dec 17, 2024)
847bcb9  test for specific errors received (alexgreenbank, Dec 17, 2024)
320c467  bolster parser tests (alexgreenbank, Dec 17, 2024)
730a7c3  Use literal chars rather than ascii codes (alexgreenbank, Dec 17, 2024)
de27d4b  remove unnecessary cast to int() (alexgreenbank, Dec 18, 2024)
af3def1  use mimirpb.PreallocTimeseries in influx parser (alexgreenbank, Dec 19, 2024)
d65b3a5  remove unnecessary tryUnwrap() (alexgreenbank, Dec 19, 2024)
9d94276  Work on byteslice rather than chars (alexgreenbank, Dec 19, 2024)
258fe0d  yoloString for label value as push code does not keep references to s… (alexgreenbank, Dec 19, 2024)
e5252d4  update go.sum (alexgreenbank, Dec 19, 2024)
f86691b  gah go.sum (alexgreenbank, Dec 19, 2024)
32cc156  oops, missed removal of paramter to InfluxHandler() (alexgreenbank, Dec 19, 2024)
c798360  wrong metrics incremented (alexgreenbank, Dec 19, 2024)
8d4e7ca  lint (alexgreenbank, Dec 19, 2024)
e915764  lint (alexgreenbank, Dec 19, 2024)
013b3d6  go mod tidy && go mod vendor (alexgreenbank, Dec 19, 2024)
3c5a166  go.sum conflict (alexgreenbank, Dec 19, 2024)
773722f  merge latest main (alexgreenbank, Jan 2, 2025)
6143162  make doc (alexgreenbank, Jan 2, 2025)
ac4e491  Merge branch 'main' into alexg/influx-push-handler (alexgreenbank, Jan 7, 2025)
767695a  make influx config hidden/experimental (alexgreenbank, Jan 9, 2025)
419d327  fix byteslice handling in replaceInvalidChars() (alexgreenbank, Jan 10, 2025)
9e9e117  remove unnecessary TODOs (alexgreenbank, Jan 10, 2025)
0da4b8f  influx: happy path e2e test (alexgreenbank, Jan 10, 2025)
c470fb3  lint (alexgreenbank, Jan 10, 2025)
c44d321  consolidate logging (alexgreenbank, Jan 10, 2025)
537fa37  CHANGELOG (alexgreenbank, Jan 10, 2025)
14bae20  about-versioning.md (alexgreenbank, Jan 10, 2025)
b951127  Merge branch 'main' into alexg/influx-push-handler (alexgreenbank, Jan 10, 2025)
9d035f5  merge main (alexgreenbank, Jan 10, 2025)
c31c191  Merge branch 'alexg/influx-push-handler' of github.com:grafana/mimir … (alexgreenbank, Jan 10, 2025)
3 changes: 3 additions & 0 deletions docs/sources/mimir/configure/about-versioning.md
@@ -74,6 +74,9 @@ The following features are currently experimental:
- Cache rule group contents.
- `-ruler-storage.cache.rule-group-enabled`
- Distributor
- Influx ingestion
- `/api/v1/influx/push` endpoint
- `-distributor.max-influx-request-size`
- Metrics relabeling
- `-distributor.metric-relabeling-enabled`
- Using status code 529 instead of 429 upon rate limit exhaustion.
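The flags documented above are supplied on the distributor's command line, together with `-distributor.influx-endpoint-enabled` (added elsewhere in this diff). A hedged sketch of enabling the experimental endpoint; only the flag names come from this PR, the target and size value are illustrative:

```shell
# Illustrative only: enable the hidden/experimental Influx endpoint and
# cap Influx request bodies at 10 MiB (the default in this PR is 100 MiB).
mimir -target=distributor \
  -distributor.influx-endpoint-enabled=true \
  -distributor.max-influx-request-size=10485760
```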
1 change: 1 addition & 0 deletions go.mod
@@ -25,6 +25,7 @@ require (
github.com/grafana/dskit v0.0.0-20250106205746-3702098cbd0c
github.com/grafana/e2e v0.1.2-0.20240118170847-db90b84177fc
github.com/hashicorp/golang-lru v1.0.2 // indirect
github.com/influxdata/influxdb/v2 v2.7.11
github.com/json-iterator/go v1.1.12
github.com/minio/minio-go/v7 v7.0.82
github.com/mitchellh/go-wordwrap v1.0.1
6 changes: 4 additions & 2 deletions go.sum
@@ -1394,6 +1394,8 @@
github.com/imdario/mergo v0.3.16 h1:wwQJbIsHYGMUyLSPrEq1CT16AhnhNJQ51+4fdHUnCl4=
github.com/imdario/mergo v0.3.16/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/influxdata/influxdb/v2 v2.7.11 h1:qs9qr5hsuFrlTiBtr5lBrALbQ2dHAanf21fBLlLpKww=
github.com/influxdata/influxdb/v2 v2.7.11/go.mod h1:zNOyzQy6WbfvGi1CK1aJ2W8khOq9+Gdsj8yLj8bHHqg=
github.com/ionos-cloud/sdk-go/v6 v6.3.0 h1:/lTieTH9Mo/CWm3cTlFLnK10jgxjUGkAqRffGqvPteY=
github.com/ionos-cloud/sdk-go/v6 v6.3.0/go.mod h1:SXrO9OGyWjd2rZhAhEpdYN6VUAODzzqRdqA9BCviQtI=
github.com/jessevdk/go-flags v1.5.0 h1:1jKYvbxEjfUl0fmqTCOfonvskHHXMjBySTLW4y9LFvc=
@@ -1553,8 +1555,8 @@
github.com/onsi/gomega v1.24.0 h1:+0glovB9Jd6z3VR+ScSwQqXVTIfJcGA9UBM8yzQxhqg=
github.com/onsi/gomega v1.24.0/go.mod h1:Z/NWtiqwBrwUt4/2loMmHL63EDLnYHmVbuBpDr2vQAg=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.2 h1:9yCKha/T5XdGtO0q9Q9a6T5NUCsTn/DrBg0D7ufOcFM=
github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.1.0-rc2 h1:2zx/Stx4Wc5pIPDvIxHXvXtQFW/7XWJGmnM7r3wg034=
github.com/opencontainers/image-spec v1.1.0-rc2/go.mod h1:3OVijpioIKYWTqjiG0zfF6wvoJ4fAXGbjdZuI2NgsRQ=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/opentracing/opentracing-go v1.2.1-0.20220228012449-10b1cf09e00b h1:FfH+VrHHk6Lxt9HdVS0PXzSXFyS2NbZKXv33FYPol0A=
github.com/opentracing/opentracing-go v1.2.1-0.20220228012449-10b1cf09e00b/go.mod h1:AC62GU6hc0BrNm+9RK9VSiwa/EUe1bkIeFORAMcHvJU=
9 changes: 9 additions & 0 deletions pkg/api/api.go
@@ -256,6 +256,7 @@ func (a *API) RegisterRuntimeConfig(runtimeConfigHandler http.HandlerFunc, userL

const PrometheusPushEndpoint = "/api/v1/push"
const OTLPPushEndpoint = "/otlp/v1/metrics"
const InfluxPushEndpoint = "/api/v1/influx/push"
[Review comment, Contributor] Is this path already an established thing? I feel like it should be /influx/v1/push instead (/api comes from the times when we only had Prometheus, and OTLP is under /otlp).

[Reply, Author] Yes, sadly it's burned into many of the agents. I'd love the prefix to be /influx but I don't think it is possible. I will make one final check though.


// RegisterDistributor registers the endpoints associated with the distributor.
func (a *API) RegisterDistributor(d *distributor.Distributor, pushConfig distributor.Config, reg prometheus.Registerer, limits *validation.Overrides) {
@@ -265,6 +266,14 @@ func (a *API) RegisterDistributor(d *distributor.Distributor, pushConfig distrib
pushConfig.MaxRecvMsgSize, d.RequestBufferPool, a.sourceIPs, a.cfg.SkipLabelNameValidationHeader,
a.cfg.SkipLabelCountValidationHeader, limits, pushConfig.RetryConfig, d.PushWithMiddlewares, d.PushMetrics, a.logger,
), true, false, "POST")

if pushConfig.EnableInfluxEndpoint {
// The Influx Push endpoint is experimental.
a.RegisterRoute(InfluxPushEndpoint, distributor.InfluxHandler(
pushConfig.MaxInfluxRequestSize, d.RequestBufferPool, a.sourceIPs, pushConfig.RetryConfig, d.PushWithMiddlewares, d.PushMetrics, a.logger,
), true, false, "POST")
}

a.RegisterRoute(OTLPPushEndpoint, distributor.OTLPHandler(
pushConfig.MaxOTLPRequestSize, d.RequestBufferPool, a.sourceIPs, limits, pushConfig.OTelResourceAttributePromotionConfig,
pushConfig.RetryConfig, pushConfig.EnableStartTimeQuietZero, d.PushWithMiddlewares, d.PushMetrics, reg, a.logger,
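The route registered above accepts Influx line protocol on `/api/v1/influx/push`. As a rough illustration of what a client would POST there, here is a simplified line-protocol formatter; it ignores escaping and field typing, and the measurement, host, URL, and tenant names are invented:

```python
def influx_line(measurement, tags, fields, ts_ns):
    """Format one simplified Influx line-protocol point:
    measurement,tag=v,... field=v,... timestamp_ns
    (Real line protocol also requires escaping and field-type suffixes.)"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

payload = influx_line("cpu", {"host": "web01"}, {"usage": 0.42}, 1736500000000000000)
# Sending it would look roughly like (hypothetical host and tenant,
# requires a running distributor with the endpoint enabled):
#   requests.post("http://mimir:8080/api/v1/influx/push",
#                 data=payload, headers={"X-Scope-OrgID": "tenant-1"})
```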
40 changes: 39 additions & 1 deletion pkg/distributor/distributor.go
@@ -87,7 +87,8 @@ const (
// metaLabelTenantID is the name of the metric_relabel_configs label with tenant ID.
metaLabelTenantID = model.MetaLabelPrefix + "tenant_id"

maxOTLPRequestSizeFlag = "distributor.max-otlp-request-size"
maxOTLPRequestSizeFlag = "distributor.max-otlp-request-size"
maxInfluxRequestSizeFlag = "distributor.max-influx-request-size"

instanceIngestionRateTickInterval = time.Second

@@ -208,6 +209,7 @@ type Config struct {

MaxRecvMsgSize int `yaml:"max_recv_msg_size" category:"advanced"`
MaxOTLPRequestSize int `yaml:"max_otlp_request_size" category:"experimental"`
MaxInfluxRequestSize int `yaml:"max_influx_request_size" category:"experimental" doc:"hidden"`
MaxRequestPoolBufferSize int `yaml:"max_request_pool_buffer_size" category:"experimental"`
RemoteTimeout time.Duration `yaml:"remote_timeout" category:"advanced"`

@@ -250,6 +252,9 @@ type Config struct {
// OTelResourceAttributePromotionConfig allows for specializing OTel resource attribute promotion.
OTelResourceAttributePromotionConfig OTelResourceAttributePromotionConfig `yaml:"-"`

// Influx endpoint disabled by default
EnableInfluxEndpoint bool `yaml:"influx_endpoint_enabled" category:"experimental" doc:"hidden"`

// Change the implementation of OTel startTime from a real zero to a special NaN value.
EnableStartTimeQuietZero bool `yaml:"start_time_quiet_zero" category:"advanced" doc:"hidden"`
}
@@ -266,9 +271,11 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet, logger log.Logger) {

f.IntVar(&cfg.MaxRecvMsgSize, "distributor.max-recv-msg-size", 100<<20, "Max message size in bytes that the distributors will accept for incoming push requests to the remote write API. If exceeded, the request will be rejected.")
f.IntVar(&cfg.MaxOTLPRequestSize, maxOTLPRequestSizeFlag, 100<<20, "Maximum OTLP request size in bytes that the distributors accept. Requests exceeding this limit are rejected.")
f.IntVar(&cfg.MaxInfluxRequestSize, maxInfluxRequestSizeFlag, 100<<20, "Maximum Influx request size in bytes that the distributors accept. Requests exceeding this limit are rejected.")
f.IntVar(&cfg.MaxRequestPoolBufferSize, "distributor.max-request-pool-buffer-size", 0, "Max size of the pooled buffers used for marshaling write requests. If 0, no max size is enforced.")
f.DurationVar(&cfg.RemoteTimeout, "distributor.remote-timeout", 2*time.Second, "Timeout for downstream ingesters.")
f.BoolVar(&cfg.WriteRequestsBufferPoolingEnabled, "distributor.write-requests-buffer-pooling-enabled", true, "Enable pooling of buffers used for marshaling write requests.")
f.BoolVar(&cfg.EnableInfluxEndpoint, "distributor.influx-endpoint-enabled", false, "Enable Influx endpoint.")
f.IntVar(&cfg.ReusableIngesterPushWorkers, "distributor.reusable-ingester-push-workers", 2000, "Number of pre-allocated workers used to forward push requests to the ingesters. If 0, no workers will be used and a new goroutine will be spawned for each ingester push request. If not enough workers available, new goroutine will be spawned. (Note: this is a performance optimization, not a limiting feature.)")
f.BoolVar(&cfg.EnableStartTimeQuietZero, "distributor.otel-start-time-quiet-zero", false, "Change the implementation of OTel startTime from a real zero to a special NaN value.")

@@ -294,12 +301,29 @@ const (
)

type PushMetrics struct {
// Influx metrics.
influxRequestCounter *prometheus.CounterVec
influxUncompressedBodySize *prometheus.HistogramVec
// TODO(alexg): more influx metrics here?
// OTLP metrics.
otlpRequestCounter *prometheus.CounterVec
uncompressedBodySize *prometheus.HistogramVec
}

func newPushMetrics(reg prometheus.Registerer) *PushMetrics {
return &PushMetrics{
influxRequestCounter: promauto.With(reg).NewCounterVec(prometheus.CounterOpts{
Name: "cortex_distributor_influx_requests_total",
Help: "The total number of Influx requests that have come in to the distributor.",
}, []string{"user"}),
// TODO(alexg): separate from uncompressedBodySize?
influxUncompressedBodySize: promauto.With(reg).NewHistogramVec(prometheus.HistogramOpts{
Name: "cortex_distributor_influx_uncompressed_request_body_size_bytes",
Help: "Size of uncompressed request body in bytes.",
NativeHistogramBucketFactor: 1.1,
NativeHistogramMinResetDuration: 1 * time.Hour,
NativeHistogramMaxBucketNumber: 100,
}, []string{"user"}),
otlpRequestCounter: promauto.With(reg).NewCounterVec(prometheus.CounterOpts{
Name: "cortex_distributor_otlp_requests_total",
Help: "The total number of OTLP requests that have come in to the distributor.",
@@ -314,6 +338,18 @@ func newPushMetrics(reg prometheus.Registerer) *PushMetrics {
}
}

func (m *PushMetrics) IncInfluxRequest(user string) {
if m != nil {
m.influxRequestCounter.WithLabelValues(user).Inc()
}
}

func (m *PushMetrics) ObserveInfluxUncompressedBodySize(user string, size float64) {
if m != nil {
m.influxUncompressedBodySize.WithLabelValues(user).Observe(size)
}
}

func (m *PushMetrics) IncOTLPRequest(user string) {
if m != nil {
m.otlpRequestCounter.WithLabelValues(user).Inc()
@@ -327,6 +363,8 @@ func (m *PushMetrics) ObserveUncompressedBodySize(user string, size float64) {
}

func (m *PushMetrics) deleteUserMetrics(user string) {
m.influxRequestCounter.DeleteLabelValues(user)
m.influxUncompressedBodySize.DeleteLabelValues(user)
m.otlpRequestCounter.DeleteLabelValues(user)
m.uncompressedBodySize.DeleteLabelValues(user)
}
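The `PushMetrics` additions above follow a per-tenant pattern: count requests and observe uncompressed body size under a `user` label, then delete both series when a tenant's metrics are cleaned up. A rough Python analogy of that lifecycle (metric names from the PR; the Go code uses promauto and native histograms, which this sketch does not reproduce):

```python
from prometheus_client import CollectorRegistry, Counter, Histogram

registry = CollectorRegistry()
influx_requests = Counter(
    "cortex_distributor_influx_requests",  # client library appends "_total"
    "The total number of Influx requests that have come in to the distributor.",
    ["user"], registry=registry)
body_size = Histogram(
    "cortex_distributor_influx_uncompressed_request_body_size_bytes",
    "Size of uncompressed request body in bytes.",
    ["user"], registry=registry)

def on_request(user, n_bytes):
    # Analogue of IncInfluxRequest + ObserveInfluxUncompressedBodySize.
    influx_requests.labels(user).inc()
    body_size.labels(user).observe(n_bytes)

def delete_user_metrics(user):
    # Analogue of deleteUserMetrics: drop this tenant's label values.
    influx_requests.remove(user)
    body_size.remove(user)

on_request("tenant-1", 1024)
```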
154 changes: 154 additions & 0 deletions pkg/distributor/influx.go
@@ -0,0 +1,154 @@
// SPDX-License-Identifier: AGPL-3.0-only

package distributor

import (
"context"
"errors"
"net/http"

"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/grafana/dskit/grpcutil"
"github.com/grafana/dskit/httpgrpc"
"github.com/grafana/dskit/middleware"
"github.com/grafana/dskit/tenant"
influxio "github.com/influxdata/influxdb/v2/kit/io"

"github.com/grafana/mimir/pkg/distributor/influxpush"
"github.com/grafana/mimir/pkg/mimirpb"
"github.com/grafana/mimir/pkg/util"
utillog "github.com/grafana/mimir/pkg/util/log"
"github.com/grafana/mimir/pkg/util/spanlogger"
)

func influxRequestParser(ctx context.Context, r *http.Request, maxSize int, _ *util.RequestBuffers, req *mimirpb.PreallocWriteRequest, logger log.Logger) (int, error) {
spanLogger, ctx := spanlogger.NewWithLogger(ctx, logger, "Distributor.InfluxHandler.decodeAndConvert")
defer spanLogger.Span.Finish()

spanLogger.SetTag("content_type", r.Header.Get("Content-Type"))
spanLogger.SetTag("content_encoding", r.Header.Get("Content-Encoding"))
spanLogger.SetTag("content_length", r.ContentLength)

pts, bytesRead, err := influxpush.ParseInfluxLineReader(ctx, r, maxSize)
level.Debug(spanLogger).Log("msg", "decodeAndConvert complete", "bytesRead", bytesRead)
[Review comment, Contributor] I'd say we should also log err here (just in case it wasn't nil).

[Reply, Author] OK, done in next commit.

if err != nil {
level.Error(logger).Log("msg", "failed to parse Influx push request", "err", err)
return bytesRead, err
}

level.Debug(spanLogger).Log(
"msg", "Influx to Prometheus conversion complete",
"metric_count", len(pts),
)
[Review comment, Contributor] I think there's no need for a separate log here, just log the len(pts) in the call above.

[Reply, Author] OK, done in next commit.


req.Timeseries = pts
return bytesRead, nil
}

// InfluxHandler is a http.Handler which accepts Influx Line protocol and converts it to WriteRequests.
func InfluxHandler(
maxRecvMsgSize int,
requestBufferPool util.Pool,
sourceIPs *middleware.SourceIPExtractor,
retryCfg RetryConfig,
push PushFunc,
pushMetrics *PushMetrics,
logger log.Logger,
) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
logger := utillog.WithContext(ctx, logger)
if sourceIPs != nil {
source := sourceIPs.Get(r)
if source != "" {
logger = utillog.WithSourceIPs(source, logger)
}
}

tenantID, err := tenant.TenantID(ctx)
if err != nil {
level.Warn(logger).Log("msg", "unable to obtain tenantID", "err", err)
return
}

pushMetrics.IncInfluxRequest(tenantID)

var bytesRead int

supplier := func() (*mimirpb.WriteRequest, func(), error) {
rb := util.NewRequestBuffers(requestBufferPool)
var req mimirpb.PreallocWriteRequest

if bytesRead, err = influxRequestParser(ctx, r, maxRecvMsgSize, rb, &req, logger); err != nil {
err = httpgrpc.Error(http.StatusBadRequest, err.Error())
rb.CleanUp()
return nil, nil, err
}

cleanup := func() {
mimirpb.ReuseSlice(req.Timeseries)
rb.CleanUp()
}
return &req.WriteRequest, cleanup, nil
}

pushMetrics.ObserveInfluxUncompressedBodySize(tenantID, float64(bytesRead))

req := newRequest(supplier)
// https://docs.influxdata.com/influxdb/cloud/api/v2/#tag/Response-codes
if err := push(ctx, req); err != nil {
if errors.Is(err, context.Canceled) {
level.Warn(logger).Log("msg", "push request canceled", "err", err)
w.WriteHeader(statusClientClosedRequest)
return
}
if errors.Is(err, influxio.ErrReadLimitExceeded) {
// TODO(alexg): One thing we have seen in the past is that telegraf clients send a batch of data
// if it is too big they should respond to the 413 below, but if a client doesn't understand this
// it just sends the next batch that is even bigger. In the past this has had to be dealt with by
// adding rate limits to drop the payloads.
level.Warn(logger).Log("msg", "request too large", "err", err, "bytesRead", bytesRead, "maxMsgSize", maxRecvMsgSize)
w.WriteHeader(http.StatusRequestEntityTooLarge)
return
}
// From: https://github.com/grafana/influx2cortex/blob/main/pkg/influx/errors.go

var httpCode int
var errorMsg string

if st, ok := grpcutil.ErrorToStatus(err); ok {
// This code is needed for a correct handling of errors returned by the supplier function.
// These errors are created by using the httpgrpc package.
httpCode = int(st.Code())
errorMsg = st.Message()
} else {
var distributorErr Error
errorMsg = err.Error()
if errors.Is(err, context.DeadlineExceeded) || !errors.As(err, &distributorErr) {
httpCode = http.StatusServiceUnavailable
} else {
httpCode = errorCauseToHTTPStatusCode(distributorErr.Cause(), false)
}
}
if httpCode != 202 {
// This error message is consistent with error message in Prometheus remote-write handler, and ingester's ingest-storage pushToStorage method.
msgs := []interface{}{"msg", "detected an error while ingesting Influx metrics request (the request may have been partially ingested)", "httpCode", httpCode, "err", err}
if httpCode/100 == 4 {
// This tag makes the error message visible for our Grafana Cloud customers.
msgs = append(msgs, "insight", true)
}
level.Error(logger).Log(msgs...)
}
if httpCode < 500 {
level.Info(logger).Log("msg", errorMsg, "response_code", httpCode, "err", err)
} else {
level.Warn(logger).Log("msg", errorMsg, "response_code", httpCode, "err", err)
}
addHeaders(w, err, r, httpCode, retryCfg)
w.WriteHeader(httpCode)
} else {
w.WriteHeader(http.StatusNoContent) // Needed for Telegraf, otherwise it tries to marshal JSON and considers the write a failure.
}
})
}
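The error handling above maps failure modes to HTTP status codes following the Influx response-code conventions. A condensed sketch of that decision table as a plain function; the dict-based error representation is invented, and 499 for `statusClientClosedRequest` is an assumption based on the common nginx convention:

```python
CLIENT_CLOSED_REQUEST = 499  # assumed value of statusClientClosedRequest

def influx_status(err):
    """Map an error (or None for success) to the handler's response code."""
    if err is None:
        return 204  # No Content: Telegraf treats any response body as a failure
    kind = err.get("kind")
    if kind == "canceled":
        return CLIENT_CLOSED_REQUEST  # context.Canceled
    if kind == "read_limit_exceeded":
        return 413  # influxio.ErrReadLimitExceeded -> Request Entity Too Large
    if "http_code" in err:
        return err["http_code"]  # e.g. decoded from an httpgrpc status or distributor Error
    return 503  # default, including context.DeadlineExceeded
```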