From ca9bdca092aed4a8bf77f4e963ca435ec92bc31e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Piotr=20Kie=C5=82kowicz?=
Date: Wed, 22 Nov 2023 18:38:11 +0100
Subject: [PATCH 01/12] Update metrics

---
 .../dotnet-metrics-attributes.rst | 108 ++++++++++++++++++
 1 file changed, 108 insertions(+)

diff --git a/gdi/get-data-in/application/otel-dotnet/configuration/dotnet-metrics-attributes.rst b/gdi/get-data-in/application/otel-dotnet/configuration/dotnet-metrics-attributes.rst
index 2c0b98935..c2d2efb91 100644
--- a/gdi/get-data-in/application/otel-dotnet/configuration/dotnet-metrics-attributes.rst
+++ b/gdi/get-data-in/application/otel-dotnet/configuration/dotnet-metrics-attributes.rst
@@ -121,6 +121,114 @@ Instrumentation metrics

 The Splunk Distribution of OpenTelemetry .NET can collect the following instrumentation metrics:

+ASP.NET Core
+-------------------------
+
+.. list-table::
+   :header-rows: 1
+   :widths: 40 10 50
+   :width: 100%
+
+   * - Metric
+     - Type
+     - Description
+   * - ``http.server.duration_{bucket|count|sum}``
+     - Cumulative counters (histogram)
+     - Duration of the inbound HTTP request, in the form of count, sum, and histogram buckets. This metric generates multiple metric time series, which might result in increased data ingestion costs. Supported only on .NET versions earlier than 8.
+   * - ``http.server.active_requests``
+     - Gauge
+     - Number of active HTTP server requests. Supported only on .NET 8 or higher.
+   * - ``http.server.request.duration_{bucket|count|sum}``
+     - Cumulative counters (histogram)
+     - Duration of HTTP server requests. Supported only on .NET 8 or higher.
+   * - ``kestrel.active_connections``
+     - Gauge
+     - Number of connections that are currently active on the server. Supported only on .NET 8 or higher.
+   * - ``kestrel.connection.duration_{bucket|count|sum}``
+     - Cumulative counters (histogram)
+     - The duration of connections on the server. Supported only on .NET 8 or higher.
+   * - ``kestrel.rejected_connections``
+     - Cumulative counters
+     - Number of connections rejected by the server. Connections are rejected when the currently active count exceeds the value configured with ``MaxConcurrentConnections``. Supported only on .NET 8 or higher.
+   * - ``kestrel.queued_connections``
+     - Gauge
+     - Number of connections that are currently queued and are waiting to start. Supported only on .NET 8 or higher.
+   * - ``kestrel.queued_requests``
+     - Gauge
+     - Number of HTTP requests on multiplexed connections (HTTP/2 and HTTP/3) that are currently queued and are waiting to start. Supported only on .NET 8 or higher.
+   * - ``kestrel.upgraded_connections``
+     - Gauge
+     - Number of HTTP connections that are currently upgraded (WebSockets). The number only tracks HTTP/1.1 connections. Supported only on .NET 8 or higher.
+   * - ``kestrel.tls_handshake.duration_{bucket|count|sum}``
+     - Cumulative counters (histogram)
+     - The duration of TLS handshakes on the server. Supported only on .NET 8 or higher.
+   * - ``kestrel.active_tls_handshakes``
+     - Gauge
+     - Number of TLS handshakes that are currently in progress on the server. Supported only on .NET 8 or higher.
+   * - ``signalr.server.connection.duration_{bucket|count|sum}``
+     - Cumulative counters (histogram)
+     - The duration of connections on the server. Supported only on .NET 8 or higher.
+   * - ``signalr.server.active_connections``
+     - Gauge
+     - Number of connections that are currently active on the server. Supported only on .NET 8 or higher.
+   * - ``aspnetcore.routing.match_attempts``
+     - Cumulative counters
+     - Number of requests that were attempted to be matched to an endpoint. Supported only on .NET 8 or higher.
+   * - ``aspnetcore.diagnostics.exceptions``
+     - Cumulative counters
+     - Number of exceptions caught by exception handling middleware. Supported only on .NET 8 or higher.
+   * - ``aspnetcore.rate_limiting.active_request_leases``
+     - Gauge
+     - Number of HTTP requests that are currently active on the server and hold a rate limiting lease. Supported only on .NET 8 or higher.
+   * - ``aspnetcore.rate_limiting.request_lease.duration_{bucket|count|sum}``
+     - Cumulative counters (histogram)
+     - The duration of rate limiting leases held by HTTP requests on the server. Supported only on .NET 8 or higher.
+   * - ``aspnetcore.rate_limiting.queued_requests``
+     - Gauge
+     - Number of HTTP requests that are currently queued, waiting to acquire a rate limiting lease. Supported only on .NET 8 or higher.
+   * - ``aspnetcore.rate_limiting.request.time_in_queue_{bucket|count|sum}``
+     - Cumulative counters (histogram)
+     - The duration of HTTP requests in a queue, waiting to acquire a rate limiting lease. Supported only on .NET 8 or higher.
+   * - ``aspnetcore.rate_limiting.requests``
+     - Cumulative counters
+     - Number of requests that tried to acquire a rate limiting lease. Requests can be rejected by global or endpoint rate limiting policies, or canceled while waiting for the lease. Supported only on .NET 8 or higher.
+
+HTTP Client
+-------------------------
+
+.. list-table::
+   :header-rows: 1
+   :widths: 40 10 50
+   :width: 100%
+
+   * - Metric
+     - Type
+     - Description
+   * - ``http.client.duration_{bucket|count|sum}``
+     - Cumulative counters (histogram)
+     - Duration of outbound HTTP requests, in the form of count, sum, and histogram buckets. This metric generates multiple metric time series, which might result in increased data ingestion costs. Supported only on .NET versions earlier than 8.
+   * - ``http.client.active_requests``
+     - Gauge
+     - Number of outbound HTTP requests that are currently active on the client. Supported only on .NET 8 or higher.
+   * - ``http.client.request.duration_{bucket|count|sum}``
+     - Cumulative counters (histogram)
+     - Duration of HTTP client requests. Supported only on .NET 8 or higher.
+   * - ``http.client.open_connections``
+     - Gauge
+     - Number of outbound HTTP connections that are currently active or idle on the client. Supported only on .NET 8 or higher.
+   * - ``http.client.connection.duration_{bucket|count|sum}``
+     - Cumulative counters (histogram)
+     - The duration of successfully established outbound HTTP connections. Supported only on .NET 8 or higher.
+   * - ``http.client.request.time_in_queue_{bucket|count|sum}``
+     - Cumulative counters (histogram)
+     - The amount of time requests spend in a queue waiting for an available connection. Supported only on .NET 8 or higher.
+   * - ``dns.lookup.duration_{bucket|count|sum}``
+     - Cumulative counters (histogram)
+     - Measures the time taken to perform a DNS lookup. Supported only on .NET 8 or higher.
+
+NServiceBus
+-------------------------
+
 ..
list-table:: :header-rows: 1 :widths: 40 10 50 From 70037ec3b8f7b7d99789e3ad0198fb2bc5ef65a3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Piotr=20Kie=C5=82kowicz?= Date: Wed, 22 Nov 2023 18:38:47 +0100 Subject: [PATCH 02/12] sql client support --- .../application/otel-dotnet/dotnet-requirements.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/gdi/get-data-in/application/otel-dotnet/dotnet-requirements.rst b/gdi/get-data-in/application/otel-dotnet/dotnet-requirements.rst index 29211f2bf..7f4e6af83 100644 --- a/gdi/get-data-in/application/otel-dotnet/dotnet-requirements.rst +++ b/gdi/get-data-in/application/otel-dotnet/dotnet-requirements.rst @@ -111,8 +111,8 @@ Traces instrumentations - Experimental Beta - Third-party support - ``NSERVICEBUS`` - * - Microsoft.Data.SqlClient and |br| System.Data.SqlClient - - Version 3.* is not supported on .NET Framework + * - Microsoft.Data.SqlClient, |br| System.Data.SqlClient, |br| System.Data + - Version 3.* is not supported on .NET Framework |br| 4.8.5 and higher |br| versions shipped with .NET Framework - Experimental Beta - Community support - ``SQLCLIENT`` From 6ee94771ea36a2f338e4a8ddb25e74d70201fba3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Piotr=20Kie=C5=82kowicz?= Date: Wed, 22 Nov 2023 18:48:16 +0100 Subject: [PATCH 03/12] Cleanup --- .../otel-dotnet/configuration/dotnet-metrics-attributes.rst | 6 ------ 1 file changed, 6 deletions(-) diff --git a/gdi/get-data-in/application/otel-dotnet/configuration/dotnet-metrics-attributes.rst b/gdi/get-data-in/application/otel-dotnet/configuration/dotnet-metrics-attributes.rst index c2d2efb91..f6510c7c0 100644 --- a/gdi/get-data-in/application/otel-dotnet/configuration/dotnet-metrics-attributes.rst +++ b/gdi/get-data-in/application/otel-dotnet/configuration/dotnet-metrics-attributes.rst @@ -237,12 +237,6 @@ NServiceBus * - Metric - Type - Description - * - ``http.client.duration_{bucket|count|sum}`` - - Cumulative counters (histogram) - - Duration of outbound HTTP requests, in the form of count, sum, and histogram buckets. This metric originates multiple metric time series, which might result in increased data ingestion costs. - * - ``http.server.duration_{bucket|count|sum}`` - - Cumulative counters (histogram) - - Duration of the inbound HTTP request, in the form of count, sum, and histogram buckets. This metric originates multiple metric time series, which might result in increased data ingestion costs. * - ``nservicebus.messaging.successes`` - Cumulative counter - Number of messages successfully processed by the endpoint. From 0ba5dd2f68f3f027fd7a6d72b76ddda7f4a307de Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri-Benedetti Date: Thu, 23 Nov 2023 10:27:07 +0100 Subject: [PATCH 04/12] Advance configuration --- .../kubernetes-config-advanced.rst | 132 ++++++++---------- .../network-explorer-setup.rst | 106 +++++++------- 2 files changed, 112 insertions(+), 126 deletions(-) diff --git a/gdi/opentelemetry/kubernetes-config-advanced.rst b/gdi/opentelemetry/kubernetes-config-advanced.rst index 20632fccb..37a69424b 100644 --- a/gdi/opentelemetry/kubernetes-config-advanced.rst +++ b/gdi/opentelemetry/kubernetes-config-advanced.rst @@ -7,11 +7,11 @@ Advanced configuration for Kubernetes .. meta:: :description: Advanced configurations for the Splunk Distribution of OpenTelemetry Collector for Kubernetes. -See the following advanced configuration options for the Collector for Kubernetes. +See the following advanced configuration options for the Collector for Kubernetes. 
-For basic Helm chart configuration, see :ref:`otel-kubernetes-config`. For log configuration, refer to :ref:`otel-kubernetes-config-logs`. +For basic Helm chart configuration, see :ref:`otel-kubernetes-config`. For log configuration, see :ref:`otel-kubernetes-config-logs`. -.. note:: +.. note:: The :new-page:`values.yaml ` file lists all supported configurable parameters for the Helm chart, along with a detailed explanation of each parameter. :strong:`Review it to understand how to configure this chart`. @@ -20,7 +20,7 @@ For basic Helm chart configuration, see :ref:`otel-kubernetes-config`. For log c Override the default configuration ============================================================== -You can override the :ref:`default OpenTelemetry agent configuration ` to use your own configuration. To do this, include a custom configuration using the ``agent.config`` parameter in the values.yaml file. For example: +You can override the :ref:`default OpenTelemetry agent configuration ` to use your own configuration. To do this, include a custom configuration using the ``agent.config`` parameter in the values.yaml file. For example: .. code-block:: yaml @@ -41,7 +41,7 @@ You can override the :ref:`default OpenTelemetry agent configuration ` for the default configurations for the control plane receivers. +See the :new-page:`agent template ` for the default configurations for the control plane receivers. -Refer to the following documentation for information on the configuration options and supported metrics for each control plane receiver: +See the following documentation for information on the configuration options and supported metrics for each control plane receiver: * :ref:`CoreDNS `. * :ref:`etcd`. To retrieve etcd metrics, see :new-page:`Setting up etcd metrics `. @@ -88,7 +88,7 @@ Refer to the following documentation for information on the configuration option Known issue ----------------------------------------------------------------------------- -There is a known limitation for the Kubernetes proxy control plane receiver. When using a Kubernetes cluster created via kops, a network connectivity issue prevents proxy metrics from being collected. The limitation can be addressed by updating the kubeProxy metric bind address in the kops cluster specification: +There is a known limitation for the Kubernetes proxy control plane receiver. When using a Kubernetes cluster created using kops, a network connectivity issue prevents proxy metrics from being collected. The limitation can be addressed by updating the kubeProxy metric bind address in the kops cluster specification: #. Set ``kubeProxy.metricsBindAddress: 0.0.0.0`` in the kops cluster specification. #. Run ``kops update cluster {cluster_name}`` and ``kops rolling-update cluster {cluster_name}`` to deploy the change. @@ -96,7 +96,7 @@ There is a known limitation for the Kubernetes proxy control plane receiver. Whe Use custom configurations for non-standard control plane components ----------------------------------------------------------------------------- -You can override the default configuration values used to connect to the control plane. If your control plane uses nonstandard ports or custom TLS settings, you need to override the default configurations. +You can override the default configuration values used to connect to the control plane. If your control plane uses nonstandard ports or custom TLS settings, you need to override the default configurations. 
The following example shows how to connect to a nonstandard API server that uses port ``3443`` for metrics and custom TLS certs stored in the /etc/myapiserver/ directory. @@ -138,29 +138,29 @@ To run the container in ``non-root`` user mode, use ``agent.securityContext`` to .. note:: Running the collector agent for log collection in non-root mode is not currently supported in CRI-O and OpenShift environments at this time. For more details, see the :new-page:`related GitHub feature request issue `. -Use the Network Explorer to collect telemetry -================================================== - -:new-page:`Network Explorer ` allows you to collect network telemetry and send it to the :ref:`OpenTelemetry Collector gateway `. -To enable the Network Explorer, set the ``enabled`` flag to ``true``: - -.. code-block:: yaml +Collect network telemetry using eBPF +================================================== +You can collect network metrics and analyze them in Network Explorer using the OpenTelemetry eBPF Helm chart. See :ref:`network-explorer-intro` for more information. - networkExplorer: - enabled: true +To install and configure the eBPF Helm chart, see :ref:`ebpf-chart-setup`. -.. caution:: Activating the network explorer automatically activates the OpenTelemetry Collector gateway. +.. note:: The ``networkExplorer`` setting of the Splunk OpenTelemetry Collector Helm chart is deprecated. For instructions on how to migrate from the ``networkExplorer`` setting to the eBPF Helm chart, see :ref:`ebpf-chart-migrate`. Prerequisites ----------------------------------------------------------------------------- -Network Explorer is only supported in the following Kubernetes-based environments on Linux hosts: +The OpenTelemetry eBPF Helm chart requires: + +* Kubernetes 1.24 or higher +* Helm 3.9 or higher + +Network metrics collection is only supported in the following Kubernetes-based environments on Linux hosts: -* RedHat Linux 7.6+ -* Ubuntu 16.04+ -* Debian Stretch+ +* Red Hat Linux 7.6 or higher +* Ubuntu 16.04 or higher +* Debian Stretch or higher * Amazon Linux 2 * Google COS @@ -169,89 +169,77 @@ Modify the reducer footprint The reducer is a single pod per Kubernetes cluster. If your cluster contains a large number of pods, nodes, and services, you can increase the resources allocated to it. -The reducer processes telemetry in multiple stages, with each stage partitioned into one or more shards, where each shard is a separate thread. Increasing the number of shards in each stage expands the capacity of the reducer. There are three stages: ingest, matching, and aggregation. You can set between 1 to 32 shards for each stage. There is one shard per reducer stage by default. +The reducer processes telemetry in multiple stages, with each stage partitioned into 1 or more shards, where each shard is a separate thread. Increasing the number of shards in each stage expands the capacity of the reducer. There are 3 stages: ingest, matching, and aggregation. You can set between 1 to 32 shards for each stage. There is one shard per reducer stage by default. -The following example sets the reducer to use 4 shards per stage. +The following example sets the reducer to use 4 shards per stage: .. 
code-block:: yaml + reducer: + ingestShards: 4 + matchingShards: 4 + aggregationShards: 4 - networkExplorer: - reducer: - ingestShards: 4 - matchingShards: 4 - aggregationShards: 4 - -Customize network telemetry generated by the Network Explorer +Customize network telemetry generated by eBPF ----------------------------------------------------------------------------- -Metrics can be deactivated, either individually or by entire categories. See the :new-page:`values.yaml ` for a complete list of categories and metrics. +You can deactivate metrics through the Helm chart configuration, either individually or by entire categories. See the :new-page:`values.yaml ` for a complete list of categories and metrics. -To disable an entire category, give the category name, followed by ``.all``: +To deactivate an entire category, give the category name, followed by ``.all``: .. code-block:: yaml + reducer: + disableMetrics: + - tcp.all - networkExplorer: - reducer: - disableMetrics: - - tcp.all - -Disable individual metrics by their names: +Deactivate individual metrics by their names: .. code-block:: yaml + reducer: + disableMetrics: + - tcp.bytes - networkExplorer: - reducer: - disableMetrics: - - tcp.bytes - -You can mix categories and names. For example, yo disable all http metrics and the ``udp.bytes`` metric use: +You can mix categories and names. For example, to turn off all HTTP metrics and the ``udp.bytes`` metric, use: .. code-block:: yaml - - networkExplorer: - reducer: - disableMetrics: - - http.all - - udp.bytes + reducer: + disableMetrics: + - http.all + - udp.bytes Reactivate metrics ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -To activate metrics you have deactivated, use ``enableMetrics``. +To activate metrics you previously deactivated, use ``enableMetrics``. -The ``disableMetrics`` flag is evaluated before ``enableMetrics``, so you can deactivate an entire category, then re-activate individual metrics in that category that you are interested in. +The ``disableMetrics`` flag is evaluated before ``enableMetrics``, so you can deactivate an entire category, then reactivate individual metrics in that category that you are interested in. For example, to deactivate all internal and http metrics but keep ``ebpf_net.collector_health``, use: .. code-block:: yaml - - networkExplorer: - reducer: - disableMetrics: - - http.all - - ebpf_net.all - - enableMetrics: - - ebpf_net.collector_health + reducer: + disableMetrics: + - http.all + - ebpf_net.all + enableMetrics: + - ebpf_net.collector_health Configure features using gates ================================================== -Use the ``agent.featureGates``, ``clusterReceiver.featureGates``, and ``gateway.featureGates`` configs to activate or deactivate features of the ``otel-collector`` agent, ``clusterReceiver``, and gateway, respectively. These configs are used to populate the otelcol binary startup argument ``-feature-gates``. +Use the ``agent.featureGates``, ``clusterReceiver.featureGates``, and ``gateway.featureGates`` configs to activate or deactivate features of the ``otel-collector`` agent, ``clusterReceiver``, and gateway, respectively. These configs are used to populate the otelcol binary startup argument ``-feature-gates``. For example, to activate ``feature1`` in the agent, activate ``feature2`` in the ``clusterReceiver``, and deactivate ``feature2`` in the gateway, run: .. 
code-block:: yaml + helm install {name} --set agent.featureGates=+feature1 --set clusterReceiver.featureGates=feature2 --set gateway.featureGates=-feature2 {other_flags} - helm install {name} --set agent.featureGates=+feature1 --set clusterReceiver.featureGates=feature2 --set gateway.featureGates=-feature2 {other_flags} - -Set the pod security policy manually +Set the pod security policy manually ================================================== Support of Pod Security Policies (PSP) was removed in Kubernetes 1.25. If you still rely on PSPs in an older cluster, you can add PSP manually: @@ -315,7 +303,7 @@ Support of Pod Security Policies (PSP) was removed in Kubernetes 1.25. If you st Configure data persistence queues ================================================== -Without any configuration, data is queued in memory only. When data cannot be sent, it's retried a few times for up to 5 minutes by default, and then dropped. If, for any reason, the Collector is restarted in this period, the queued data will be gone. +Without any configuration, data is queued in memory only. When data can't be sent, it's retried a few times for up to 5 minutes by default, and then dropped. If, for any reason, the Collector is restarted in this period, the queued data is discarded. If you want the queue to be persisted on disk if the Collector restarts, set ``splunkPlatform.sendingQueue.persistentQueue.enabled=true`` to enable support for logs, metrics and traces. @@ -391,6 +379,6 @@ It's not possible to run persistent buffering if there are multiple replicas of Cluster Receiver support ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -The Cluster receiver is a 1-replica deployment of the OpenTelemetry Collector. Because the Kubernetes control plane can select any available node to run the cluster receiver pod (unless ``clusterReceiver.nodeSelector`` is explicitly set to pin the pod to a specific node), ``hostPath`` or ``local`` volume mounts wouldn't work for such environments. +The Cluster receiver is a 1-replica deployment of the OpenTelemetry Collector. Because the Kubernetes control plane can select any available node to run the cluster receiver pod (unless ``clusterReceiver.nodeSelector`` is explicitly set to pin the pod to a specific node), ``hostPath`` or ``local`` volume mounts don't work for such environments. -Data persistence is currently not applicable to the Kubernetes cluster metrics and Kubernetes events. \ No newline at end of file +Data persistence is currently not applicable to the Kubernetes cluster metrics and Kubernetes events. diff --git a/infrastructure/network-explorer/network-explorer-setup.rst b/infrastructure/network-explorer/network-explorer-setup.rst index 02ef114ef..ab3dd47e9 100644 --- a/infrastructure/network-explorer/network-explorer-setup.rst +++ b/infrastructure/network-explorer/network-explorer-setup.rst @@ -2,7 +2,6 @@ .. _network-explorer-setup: - ******************************************************* Set up Network Explorer ******************************************************* @@ -28,17 +27,17 @@ To use Network Explorer with Kubernetes, you must meet the following requirement * - Environment - Network Explorer is supported in Kubernetes-based environments on Linux hosts. Use Helm-based management. - + * - Operating system - * Linux kernel versions 3.10 to 3.19, 4.0 to 4.20, and 5.0 to 5.19, unless explicitly not allowed. 
Versions 4.15.0, 4.19.57, and 5.1.16 are not supported - * RedHat Linux versions 7.6 or higher - * Ubuntu versions 16.04 or higher - * Debian Stretch+ - * Amazon Linux 2 - * Google COS + * RedHat Linux versions 7.6 or higher + * Ubuntu versions 16.04 or higher + * Debian Stretch+ + * Amazon Linux 2 + * Google COS * - Kubernetes version - - Network Explorer is supported on all active releases of Kubernetes. For more information, see :new-page:`Releases ` in the Kubernetes documentation. + - Network Explorer is supported on all active releases of Kubernetes. For more information, see :new-page:`Releases ` in the Kubernetes documentation. .. note:: Network Explorer is not compatible with GKE Autopilot clusters. @@ -53,7 +52,7 @@ To use Network Explorer with OpenShift, you must meet the following requirements * - OpenShift version - An on-premises OpenShift cluster or an OpenShift Rosa cluster version 4.12.18 or 4.12.13 - + * - Admin role - You must be an admin in Splunk Observability Cloud to install Network Explorer on OpenShift @@ -70,38 +69,38 @@ Network Explorer consists of the following components: * - :strong:`Component` - :strong:`Description` - :strong:`Required?` - - :strong:`Enabled by default?` + - :strong:`On by default?` * - The reducer - The reducer takes the data points collected by the collectors and reduces them to actual metric time series (MTS). The reducer also connects to the Splunk Distribution of OpenTelemetry Collector on the OTLP gRPC port. - - Yes. Install and configure at least one instance of the reducer. + - Yes. Install and configure at least one instance of the reducer. - Yes * - The kernel collector - The Extended Berkeley Packet Filter (eBPF) agent responsible for gathering data points from the kernel. - - Yes. Install and configure the kernel collector on each of your hosts. + - Yes. Install and configure the kernel collector on each of your hosts. - Yes - * - The Kubernetes collector - - The Kubernetes collector further enriches collected data points with additional metadata. - - No. If you want to get additional metadata, install and configure at least one instance of the Kubernetes collector on each Kubernetes cluster. + * - The Kubernetes collector + - The Kubernetes collector further enriches collected data points with additional metadata. + - No. If you want to get additional metadata, install and configure at least one instance of the Kubernetes collector on each Kubernetes cluster. - Yes. If you want to disable the Kubernetes collector, set ``k8sCollector.enabled`` to ``false``. * - The cloud collector - The cloud collector further enriches collected data points with additional metadata. - No. If your Kubernetes is hosted by, or installed within, AWS, and you want to get additional metadata, install and configure at least one instance of the cloud collector. - No. If you want to enable the cloud collector, set ``cloudCollector.enabled`` to ``true``. - + .. _install-network-explorer: Install Network Explorer ======================================================================================= -For the Splunk Distribution of OpenTelemetry Collector to work with Network Explorer, you must install it in data forwarding (gateway) mode, and perform the following steps: +For the Splunk Distribution of OpenTelemetry Collector to work with Network Explorer, you must install it in data forwarding (gateway) mode, and follow these steps: -- Enable OTLP gRPC reception by configuring an OTLP gRPC metric receiver on the Gateway. 
-- Enable SignalFx export by configuring a SignalFx exporter on the Gateway with the valid realm and access token. +- Turn on OTLP gRPC reception by configuring an OTLP gRPC metric receiver on the Gateway. +- Turn on SignalFx export by configuring a SignalFx exporter on the Gateway with the valid realm and access token. The OTLP gRPC metric receiver and SignalFx exporter are already configured in the Helm chart for the Splunk Distribution of OpenTelemetry Collector, so if you use the Helm chart method to install the Splunk Distribution of OpenTelemetry Collector, you don't need to configure these requirements separately. @@ -117,16 +116,16 @@ The following table shows required parameters for this installation: * - ``namespace`` - The Kubernetes namespace to install into. This value must match the value for the namespace of the Network Explorer. * - ``splunkObservability.realm`` - - Splunk realm to send telemetry data to. For example, ``us0``. + - Splunk realm to send telemetry data to. For example, ``us0``. * - ``splunkObservability.accessToken`` - - The access token for your organization. An access token with ingest scope is sufficient. For more information, see :ref:`admin-org-tokens`. + - The access token for your organization. An access token with ingest scope is sufficient. For more information, see :ref:`admin-org-tokens`. * - ``clusterName`` - An arbitrary value that identifies your Kubernetes cluster. * - ``networkExplorer.enabled`` - - Set this to ``true`` to enable Network Explorer. + - Set this to ``true`` to activate Network Explorer. * - ``agent.enabled`` - * If you are adding Network Explorer to an existing Splunk Distribution of OpenTelemetry Collector configuration, leave ``agent.enabled`` as is. - * If you are installing a new instance of the Splunk Distribution of OpenTelemetry Collector and only want to collect telemetry from Network Explorer, set this to ``false`` to disable installing the Splunk Distribution of OpenTelemetry Collector in host monitoring (agent) mode on each Kubernetes node. + * If you are installing a new instance of the Splunk Distribution of OpenTelemetry Collector and only want to collect telemetry from Network Explorer, set this to ``false`` to turn off installing the Splunk Distribution of OpenTelemetry Collector in host monitoring (agent) mode on each Kubernetes node. * If you are installing a new instance of the Splunk Distribution of OpenTelemetry Collector and want to collect telemetry from both Network Explorer and the individual OpenTelemetry Collector agents, set this to ``true``. * - ``clusterReceiver.enabled`` - * If you are adding Network Explorer to an existing Splunk Distribution of OpenTelemetry Collector configuration, leave ``clusterReceiver.enabled`` as is. @@ -139,7 +138,7 @@ The following table shows required parameters for this installation: Example: Install Network Explorer for Kubernetes ---------------------------------------------------------- -In this example, the reducer, the kernel collector, and the Kubernetes collector are configured. The cloud collector isn't enabled. +In this example, the reducer, the kernel collector, and the Kubernetes collector are configured. The cloud collector isn't turned on. Follow these steps to install Network Explorer using the Helm chart method: @@ -204,7 +203,7 @@ Follow these steps to install Network Explorer using the Helm chart method: sudo yum install -y kernel-devel-$(uname -r) -For additional Splunk Distribution of OpenTelemetry Collector configuration, see :ref:`otel-install-k8s`. 
+For additional Splunk Distribution of OpenTelemetry Collector configuration, see :ref:`otel-install-k8s`. Example: Install Network Explorer for OpenShift @@ -212,7 +211,7 @@ Example: Install Network Explorer for OpenShift Follow these steps to install Network Explorer for OpenShift: -#. Each node of an OpenShift cluster runs on Red Hat Enterprise Linux CoreOS, which has SELinux enabled by default. To install the Network Explorer kernel collector, you have to configure Super-Privileged Container (SPC) for SELinux. Run the following script to modify the SELinux SPC policy to allow additional access to ``spc_t`` domain processes. +#. Each node of an OpenShift cluster runs on Red Hat Enterprise Linux CoreOS, which has SELinux enabled by default. To install the Network Explorer kernel collector, you have to configure Super-Privileged Container (SPC) for SELinux. Run the following script to modify the SELinux SPC policy to allow additional access to ``spc_t`` domain processes. .. code-block:: bash @@ -235,7 +234,7 @@ Follow these steps to install Network Explorer for OpenShift: #. Run the following commands to deploy the Helm chart. .. code-block:: bash - + helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart #. Run the following command to update the Helm chart. @@ -272,7 +271,7 @@ Follow these steps to install Network Explorer for OpenShift: oc adm policy add-scc-to-user privileged -z my-splunk-otel-collector-kernel-collector -n -#. Run the following command to update the default security context constraints (SCC) for your OpenShift cluster, so that images are not forced to run as a pre-allocated User Identifier, without granting everyone access to the privileged SCC. +#. Run the following command to update the default security context constraints (SCC) for your OpenShift cluster, so that images are not forced to run as a pre-allocated User Identifier, without granting everyone access to the privileged SCC. .. code-block:: bash @@ -344,7 +343,7 @@ Change the resource footprint of the reducer The reducer is a single pod per Kubernetes cluster. If your cluster contains a large number of pods, nodes, and services, you can increase the resources allocated to it. The reducer processes telemetry in multiple stages, with each stage partitioned into one or more shards, where each shard is a separate thread. Increasing the number of shards in each stage expands the capacity of the reducer. - + Change the following parameters in the :new-page:`Splunk Distribution of OpenTelemetry Collector values file ` to increase or decrease the number of shards per reducer stage. You can set between 1-32 shards. The default configuration is 1 shard per reducer stage. @@ -389,19 +388,19 @@ Customize network telemetry generated by Network Explorer If you want to collect fewer or more network telemetry metrics, you can update the :new-page:`Splunk Distribution of OpenTelemetry Collector values file `. -The following sections show you how to disable or enable different metrics. +The following sections show you how to turn off or turn on different metrics. -Enable all metrics, including metrics turned off by default -++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +Turn on all metrics, including metrics turned off by default +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code-block:: yaml + .. 
code-block:: yaml networkExplorer: reducer: disableMetrics: - none -Disable entire metric categories +Turn off entire metric categories ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code-block:: yaml @@ -414,11 +413,10 @@ Disable entire metric categories - dns.all - http.all - -Disable an individual TCP metric +Turn off an individual TCP metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code-block:: yaml + .. code-block:: yaml networkExplorer: reducer: @@ -434,10 +432,10 @@ Disable an individual TCP metric - tcp.resets -Disable an individual UDP metric +Turn off an individual UDP metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code-block:: yaml + .. code-block:: yaml networkExplorer: reducer: @@ -447,10 +445,10 @@ Disable an individual UDP metric - udp.active - udp.drops -Disable an individual DNS metric +Turn off an individual DNS metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code-block:: yaml + .. code-block:: yaml networkExplorer: reducer: @@ -461,7 +459,7 @@ Disable an individual DNS metric - dns.responses - dns.timeouts -Disable an individual HTTP metric +Turn off an individual HTTP metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code-block:: yaml @@ -474,7 +472,7 @@ Disable an individual HTTP metric - http.active_sockets - http.status_code -Disable an internal metric +Turn off an internal metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code-block:: yaml @@ -492,7 +490,7 @@ Disable an internal metric .. note:: This list represents the set of internal metrics which are enabled by default. -Enable entire metric categories +Turn on entire metric categories ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code-block:: yaml @@ -506,7 +504,7 @@ Enable entire metric categories - http.all - ebpf_net.all -Enable an individual TCP metric +Turn on an individual TCP metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code-block:: yaml @@ -524,10 +522,10 @@ Enable an individual TCP metric - tcp.new_sockets - tcp.resets -Enable an individual UDP metric +Turn on an individual UDP metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code-block:: yaml + .. code-block:: yaml networkExplorer: reducer: @@ -537,10 +535,10 @@ Enable an individual UDP metric - udp.active - udp.drops -Enable an individual DNS metric +Turn on an individual DNS metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code-block:: yaml + .. code-block:: yaml networkExplorer: reducer: @@ -551,7 +549,7 @@ Enable an individual DNS metric - dns.responses - dns.timeouts -Enable an individual HTTP metric +Turn on an individual HTTP metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. code-block:: yaml @@ -564,7 +562,7 @@ Enable an individual HTTP metric - http.active_sockets - http.status_code -Enable an internal metric +Turn on an internal metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ .. 
code-block:: yaml @@ -572,7 +570,7 @@ Enable an internal metric networkExplorer: reducer: enableMetrics: - - ebpf_net.span_utilization_fraction + - ebpf_net.span_utilization_fraction - ebpf_net.pipeline_metric_bytes_discarded - ebpf_net.codetiming_min_ns - ebpf_net.entrypoint_info @@ -595,9 +593,9 @@ In the following example, all HTTP metrics along with certain individual TCP and - tcp.new_sockets - tcp.resets - udp.bytes - - udp.packets + - udp.packets -In the following example, all HTTP metrics along with certain individual internal metrics are enabled. +In the following example, all HTTP metrics along with certain individual internal metrics are turned on. .. note:: The ``disableMetrics`` flag is evaluated before the ``enableMetrics`` flag. From 444377672abd6fee3a9a08a456e337172701b666 Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri-Benedetti Date: Thu, 23 Nov 2023 10:30:07 +0100 Subject: [PATCH 05/12] Requirements --- .../network-explorer-setup.rst | 43 ++++++++++--------- 1 file changed, 23 insertions(+), 20 deletions(-) diff --git a/infrastructure/network-explorer/network-explorer-setup.rst b/infrastructure/network-explorer/network-explorer-setup.rst index ab3dd47e9..165141a17 100644 --- a/infrastructure/network-explorer/network-explorer-setup.rst +++ b/infrastructure/network-explorer/network-explorer-setup.rst @@ -7,7 +7,7 @@ Set up Network Explorer ******************************************************* .. meta:: - :description: Install and configure Network Explorer on Kubernetes systems + :description: Install and configure Network Explorer on Kubernetes systems using the OpenTelemetry eBPF Helm chart. .. note:: The following topic only applies to Kubernetes systems. If you want to set up Network Explorer on other systems, see :ref:`network-explorer-setup-non-k8s`. @@ -18,26 +18,29 @@ Prerequisites To use Network Explorer with Kubernetes, you must meet the following requirements. - .. list-table:: - :header-rows: 1 - :widths: 30 70 +.. list-table:: + :header-rows: 1 + :widths: 30 70 - * - :strong:`Prerequisite` - - :strong:`Description` - - * - Environment - - Network Explorer is supported in Kubernetes-based environments on Linux hosts. Use Helm-based management. - - * - Operating system - - * Linux kernel versions 3.10 to 3.19, 4.0 to 4.20, and 5.0 to 5.19, unless explicitly not allowed. Versions 4.15.0, 4.19.57, and 5.1.16 are not supported - * RedHat Linux versions 7.6 or higher - * Ubuntu versions 16.04 or higher - * Debian Stretch+ - * Amazon Linux 2 - * Google COS - - * - Kubernetes version - - Network Explorer is supported on all active releases of Kubernetes. For more information, see :new-page:`Releases ` in the Kubernetes documentation. + * - :strong:`Prerequisite` + - :strong:`Description` + + * - Environment + - Network Explorer is supported in Kubernetes-based environments on Linux hosts. Use Helm-based management. + + * - Operating system + - * Linux kernel versions 3.10 to 3.19, 4.0 to 4.20, and 5.0 to 5.19, unless explicitly not allowed. Versions 4.15.0, 4.19.57, and 5.1.16 are not supported + * Red Hat Linux versions 7.6 or higher + * Ubuntu versions 16.04 or higher + * Debian Stretch+ + * Amazon Linux 2 + * Google COS + + * - Kubernetes version + - Network Explorer requires Kubernetes 1.24 or higher. For more information, see :new-page:`Releases ` in the Kubernetes documentation. + + * - Helm version + - Network Explorer requires Helm version 3.9 or higher. .. note:: Network Explorer is not compatible with GKE Autopilot clusters. 
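Before applying the stricter requirements this patch introduces, it can help to check an existing cluster against them. The following shell commands are a quick sketch, assuming ``helm`` and ``kubectl`` are on your path and that you can run ``uname`` on a node; the version thresholds come from the table above.

.. code-block:: bash

   # Client and cluster versions: Helm 3.9 or higher, Kubernetes 1.24 or higher
   helm version --short
   kubectl version --output=yaml | grep -A2 serverVersion

   # On each Linux node: the kernel must fall inside a supported range,
   # and must not be 4.15.0, 4.19.57, or 5.1.16
   uname -r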
From 15234ee9109c220d35d3a9d546e7405c8c172b8d Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri-Benedetti Date: Thu, 23 Nov 2023 11:25:44 +0100 Subject: [PATCH 06/12] Install and examples --- gdi/opentelemetry/install-k8s.rst | 70 +-- .../kubernetes-config-advanced.rst | 2 +- .../network-explorer-setup-non-k8s.rst | 24 +- .../network-explorer-setup.rst | 533 +++++++++--------- 4 files changed, 324 insertions(+), 305 deletions(-) diff --git a/gdi/opentelemetry/install-k8s.rst b/gdi/opentelemetry/install-k8s.rst index 83504a99e..15c180545 100644 --- a/gdi/opentelemetry/install-k8s.rst +++ b/gdi/opentelemetry/install-k8s.rst @@ -38,10 +38,10 @@ The Helm chart works with default configurations of the main Kubernetes distribu * :new-page:`Google Kubernetes Engine ` * :new-page:`Red Hat OpenShift ` * Minikube. This distribution was made for local developers and is not meant to be used in production. - - Minikube was created to spin up various past versions of Kubernetes. - - Minikube versions don't necessarily align with Kubernetes versions. For example, the :new-page:`Minikube v1.27.1 releases notes ` state the default Kubernetes version was bumped to v1.25.2. + - Minikube was created to spin up various past versions of Kubernetes. + - Minikube versions don't necessarily align with Kubernetes versions. For example, the :new-page:`Minikube v1.27.1 releases notes ` state the default Kubernetes version was bumped to v1.25.2. -While the chart should work for other Kubernetes distributions, the :new-page:`values.yaml ` configuration file could require additional updates. +While the chart should work for other Kubernetes distributions, the :new-page:`values.yaml ` configuration file could require additional updates. .. _helm-chart-components: @@ -57,19 +57,19 @@ Agent component The agent component consists of the following config files: -* daemonset.yaml +* daemonset.yaml * Defines a DaemonSet to ensure that some (or all) nodes in the cluster run a copy of the agent pod. * Collects data from each node in the Kubernetes cluster. -* configmap-agent.yaml +* configmap-agent.yaml * Provides configuration data to the agent component. - * Contains details about how the agent should collect and forward data. + * Contains details about how the agent collects and forwards data. * service-agent.yaml (optional) - * Defines a Kubernetes Service for the agent. + * Defines a Kubernetes Service for the agent. * Used for internal communication within the cluster or for exposing specific metrics or health endpoints. Cluster receiver component @@ -85,7 +85,7 @@ The cluster receiver component consists of the following config files: * configmap-cluster-receiver.yaml * Provides configuration data to the cluster receiver. - * Contains details about how the receiver should process and forward the data it collects. + * Contains details about how the receiver processes and forwards the data it collects. * pdb-cluster-receiver.yaml @@ -110,7 +110,7 @@ The gateway component consists of the following config files: * configmap-gateway.yaml * Provides configuration data to the gateway. - * Contains details about how the gateway should process, transform, and forward the data it receives. + * Contains details about how the gateway processes, transforms, and forwards the data it receives. * service.yaml @@ -122,7 +122,7 @@ The gateway component consists of the following config files: * Defines a Pod Disruption Budget (PDB) for the gateway. 
* Ensures that a certain number or percentage of replicas of the gateway remain available during voluntary disruptions. -Prerequisites +Prerequisites ------------------------------------------------ You need the following resources to use the chart: @@ -132,30 +132,30 @@ You need the following resources to use the chart: .. _collector-k8s-destination: -Prerequisites: Destination +Prerequisites: Destination ------------------------------------------------ -The Collector for Kubernetes requires a destination: Splunk Enterprise or Splunk Cloud (``splunkPlatform``) or Splunk Observability Cloud (``splunkObservability``). +The Collector for Kubernetes requires a destination: Splunk Enterprise or Splunk Cloud Platform (``splunkPlatform``) or Splunk Observability Cloud (``splunkObservability``). Depending on your destination, you need: * To send data to ``splunkPlatform``: - * Splunk Enterprise 8.0 or later. + * Splunk Enterprise 8.0 or higher. * A minimum of one Splunk platform index ready to collect the log data. This index is used for ingesting logs. * An HTTP Event Collector (HEC) token and endpoint. See :new-page:`https://docs.splunk.com/Documentation/Splunk/8.2.0/Data/UsetheHTTPEventCollector ` and :new-page:`https://docs.splunk.com/Documentation/Splunk/8.2.0/Data/ScaleHTTPEventCollector `. * ``splunkPlatform.endpoint``. URL to a Splunk instance, for example: ``"http://localhost:8088/services/collector"``. * ``splunkPlatform.token``. Splunk HTTP Event Collector token. * To send data to ``splunkObservability``: - + * ``splunkObservability.accessToken``. Your Splunk Observability org access token. See :ref:`admin-org-tokens`. * ``splunkObservability.realm``. Splunk realm to send telemetry data to. The default is ``us0``. See :new-page:`realms `. Deploy the Helm chart -------------------------------- -Run the following commands to deploy the Helm chart: +Run the following commands to deploy the Helm chart: #. Add the Helm repo: @@ -163,33 +163,33 @@ Run the following commands to deploy the Helm chart: helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart -#. Determine your destination. +#. Determine your destination. - For Splunk Observability Cloud: + For Splunk Observability Cloud: .. code-block:: bash helm install my-splunk-otel-collector --set="splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector - For Splunk Enterprise or Splunk Cloud: + For Splunk Enterprise or Splunk Cloud Platform: .. code-block:: bash helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://127.0.0.1:8088/services/collector,splunkPlatform.token=xxxxxx,splunkPlatform.metricsIndex=k8s-metrics,splunkPlatform.index=main,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector - For both Splunk Observability Cloud and Splunk Enterprise or Splunk Cloud: + For both Splunk Observability Cloud and Splunk Enterprise or Splunk Cloud Platform: .. code-block:: bash helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://127.0.0.1:8088/services/collector,splunkPlatform.token=xxxxxx,splunkPlatform.metricsIndex=k8s-metrics,splunkPlatform.index=main,splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector -#. Specify a namespace to deploy the chart to with the ``-n`` argument: +#. Specify a namespace to deploy the chart to with the ``-n`` argument: .. 
code-block:: bash helm -n otel install my-splunk-otel-collector -f values.yaml splunk-otel-collector-chart/splunk-otel-collector -.. caution:: +.. caution:: The :new-page:`values.yaml ` file lists all supported configurable parameters for the Helm chart, along with a detailed explanation of each parameter. :strong:`Review it to understand how to configure this chart`. @@ -208,10 +208,10 @@ For example: .. code-block:: bash helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart - helm install my-splunk-otel-collector --set="splunkRealm=us0,splunkAccessToken=xxxxxx,clusterName=my-cluster" --set=distribution={value},cloudProvider={value} splunk-otel-collector-chart/splunk-otel-collector + helm install my-splunk-otel-collector --set="splunkRealm=us0,splunkAccessToken=xxxxxx,clusterName=my-cluster" --set=distribution={value},cloudProvider={value} splunk-otel-collector-chart/splunk-otel-collector -* Read more about :ref:`otel-kubernetes-config` and also :ref:`the advanced Kubernetes config `. -* See :new-page:`examples of Helm chart configuration ` for additional chart installation examples or upgrade commands to change the default behavior. +* Read more about :ref:`otel-kubernetes-config` and also :ref:`the advanced Kubernetes config `. +* See :new-page:`examples of Helm chart configuration ` for additional chart installation examples or upgrade commands to change the default behavior. * For logs, see :ref:`otel-kubernetes-config-logs`. Set Helm using a YAML file @@ -225,16 +225,16 @@ You can also set Helm values as arguments using a YAML file. For example, after See :new-page:`an example of a YAML file in GitHub `. Options include: -* Set ``isWindows`` to ``true`` to apply the Kubernetes cluster with Windows worker nodes. -* Set ``networkExplorer.enabled`` to ``true`` to use the default values for :ref:`splunk-otel-network-explorer `. +* Set ``isWindows`` to ``true`` to apply the Kubernetes cluster with Windows worker nodes. + Set Prometheus metrics ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Set the Collector to automatically scrape any pod emitting Prometheus by adding this property to the Helm chart's values YAML: +Set the Collector to automatically scrape any pod emitting Prometheus by adding this property to the Helm chart's values YAML: .. code-block:: bash - + autodetect: prometheus: true @@ -263,7 +263,7 @@ Install the Collector with resource YAML manifests To specify the configuration, you at least need to know your Splunk realm and base64-encoded access token. -A configuration file can contain multiple resource manifests. Each manifest applies a specific state to a Kubernetes object. The manifests must be configured for Splunk Observability Cloud only and come with all telemetry types activated for the agent, which is the default when installing the Helm chart. +A configuration file can contain multiple resource manifests. Each manifest applies a specific state to a Kubernetes object. The manifests must be configured for Splunk Observability Cloud only and come with all telemetry types activated for the agent, which is the default when installing the Helm chart. Determine which manifest you want to use ------------------------------------------------ @@ -278,7 +278,7 @@ Update the manifest Once you've decided which manifest suits you better, make the following updates: #. In the secret.yaml manifest, update the ``splunk_observability_access_token`` data field with your base64-encoded access token. -#. 
Update any configmap-agent.yaml, configmap-gateway.yaml, and configmap-cluster-receiver.yaml manifest files you're going to use. Search for "CHANGEME" to find the values that must be updated to use the rendered manifests directly. +#. Update any configmap-agent.yaml, configmap-gateway.yaml, and configmap-cluster-receiver.yaml manifest files you use. Search for "CHANGEME" to find the values that must be updated to use the rendered manifests directly. #. You need to update "CHANGEME" in exporter configurations to the value of the Splunk realm. #. You need to update "CHANGEME" in attribute processor configurations to the value of the cluster name. @@ -302,17 +302,17 @@ For data forwarding (gateway) mode, download the :new-page:`gateway-only manifes Use templates -------------------------------- -You can create your own manifest YAML files with customized parameters using ``helm template`` command. +You can create your own manifest YAML files with customized parameters using ``helm template`` command. .. code-block:: bash - helm template --namespace default --set cloudProvider='aws' --set distribution='openshift' --set splunkObservability.accessToken='KUwtoXXXXXXXX' --set clusterName='my-openshift-EKS-dev-cluster' --set splunkObservability.realm='us1' --set gateway.enabled='false' --output-dir --generate-name splunk-otel-collector-chart/splunk-otel-collector + helm template --namespace default --set cloudProvider='aws' --set distribution='openshift' --set splunkObservability.accessToken='KUwtoXXXXXXXX' --set clusterName='my-openshift-EKS-dev-cluster' --set splunkObservability.realm='us1' --set gateway.enabled='false' --output-dir --generate-name splunk-otel-collector-chart/splunk-otel-collector If you prefer, you can update the values.yaml file first. .. code-block:: bash - helm template --namespace default --values values.yaml --output-dir --generate-name splunk-otel-collector-chart/splunk-otel-collector + helm template --namespace default --values values.yaml --output-dir --generate-name splunk-otel-collector-chart/splunk-otel-collector Manifest files will be created in your specified folder ````. @@ -322,7 +322,7 @@ Manifest examples See the following manifest to set security constraints: .. github:: yaml - :url: https://raw.githubusercontent.com/signalfx/splunk-otel-collector-chart/main/examples/distribution-openshift/rendered_manifests/securityContextConstraints.yaml + :url: https://raw.githubusercontent.com/signalfx/splunk-otel-collector-chart/main/examples/distribution-openshift/rendered_manifests/securityContextConstraints.yaml .. _k8s-operator: @@ -332,7 +332,7 @@ Use the Kubernetes Operator in OpenTelemetry You can install the Collector with an upstream Kubernetes Operator for Auto Instrumentation. This instance of the Kubernetes Operator is part of the upstream OpenTelemetry Operator project. See the :new-page:`OpenTelemetry GitHub repo ` for more information. -.. note:: The upstream Kubernetes Operator is not related to the Splunk Operator for Kubernetes, which is used to deploy and operate Splunk Enterprise deployments in a Kubernetes infrastructure. +.. note:: The upstream Kubernetes Operator is not related to the Splunk Operator for Kubernetes, which is used to deploy and operate Splunk Enterprise deployments in a Kubernetes infrastructure. 
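One detail from the manifest steps above: the ``splunk_observability_access_token`` value in secret.yaml must be base64-encoded, not plain text. As a sketch, assuming a POSIX shell with ``base64`` available and a placeholder token:

.. code-block:: bash

   # Encode a placeholder access token for the secret.yaml manifest.
   # -n keeps echo from appending a newline, which would corrupt the value.
   echo -n 'YOUR_ACCESS_TOKEN' | base64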
Splunk Distribution for the Kubernetes Operator (Alpha) -------------------------------------------------------- diff --git a/gdi/opentelemetry/kubernetes-config-advanced.rst b/gdi/opentelemetry/kubernetes-config-advanced.rst index 37a69424b..ad6536d2a 100644 --- a/gdi/opentelemetry/kubernetes-config-advanced.rst +++ b/gdi/opentelemetry/kubernetes-config-advanced.rst @@ -146,7 +146,7 @@ You can collect network metrics and analyze them in Network Explorer using the O To install and configure the eBPF Helm chart, see :ref:`ebpf-chart-setup`. -.. note:: The ``networkExplorer`` setting of the Splunk OpenTelemetry Collector Helm chart is deprecated. For instructions on how to migrate from the ``networkExplorer`` setting to the eBPF Helm chart, see :ref:`ebpf-chart-migrate`. +.. note:: Starting from version 0.88 of the Helm chart, the ``networkExplorer`` setting of the Splunk OpenTelemetry Collector Helm chart is deprecated. For instructions on how to migrate from the ``networkExplorer`` setting to the eBPF Helm chart, see :ref:`ebpf-chart-migrate`. Prerequisites ----------------------------------------------------------------------------- diff --git a/infrastructure/network-explorer/network-explorer-setup-non-k8s.rst b/infrastructure/network-explorer/network-explorer-setup-non-k8s.rst index 23e46a90f..3585f3a54 100644 --- a/infrastructure/network-explorer/network-explorer-setup-non-k8s.rst +++ b/infrastructure/network-explorer/network-explorer-setup-non-k8s.rst @@ -12,7 +12,7 @@ To use Network Explorer on non-Kubernetes systems, you must install the Extended Install the eBPF collector ============================== -Follow these steps to install and configure the eBPF collector on non-Kubernetes systems: +Follow these steps to install and configure the eBPF collector on non-Kubernetes systems: #. Download the eBPF packages from the :new-page:`GitHub releases page `. #. Run the following commands to install the reducer, the kernel collector, and the cloud collector components. @@ -63,9 +63,9 @@ Follow these steps to install and configure the eBPF collector on non-Kubernetes * - :strong:`Parameter` - :strong:`Value` * - ``prom_bind`` - - IP address and port number on which Prometheus will scrape + - IP address and port number on which Prometheus scrapes metrics * - ``disable_prometheus_metrics`` - - ``false`` + - ``false`` * If you use the cloud collector, set ``enable_aws_enrichment`` to ``true``. @@ -73,7 +73,7 @@ Follow these steps to install and configure the eBPF collector on non-Kubernetes .. tabs:: - .. code-tab:: bash Start command + .. code-tab:: bash Start command systemctl start reducer @@ -90,15 +90,15 @@ Follow these steps to install and configure the eBPF collector on non-Kubernetes * - :strong:`Parameter` - :strong:`Value` * - Intake host - - IP address or hostname where the reducer is running - * - Intake port + - IP address or host name where the reducer is running + * - Intake port - Same value as ``telemetry_port`` in the reducer.yaml file #. Run the following command to start or restart the kernel collector to apply the changes. .. tabs:: - .. code-tab:: bash Start command + .. 
code-tab:: bash Start command systemctl start kernel-collector @@ -115,15 +115,15 @@ Follow these steps to install and configure the eBPF collector on non-Kubernetes * - :strong:`Parameter` - :strong:`Value` * - Intake host - - IP address or hostname where the reducer is running - * - Intake port + - IP address or host name where the reducer is running + * - Intake port - Same value as ``telemetry_port`` in the reducer.yaml file #. Run the following command to start or restart the cloud collector to apply the changes. .. tabs:: - .. code-tab:: bash Start command + .. code-tab:: bash Start command systemctl start cloud-collector @@ -134,11 +134,11 @@ Follow these steps to install and configure the eBPF collector on non-Kubernetes Next steps ==================================== -Once you set up Network Explorer, you can start monitoring network telemetry metrics coming into your Splunk Infrastructure Monitoring platform using one or more of the following options: +Once you set up Network Explorer, you can start monitoring network telemetry metrics coming into your Splunk Infrastructure Monitoring platform using 1 or more of the following options: - Built-in Network Explorer navigators. To see the Network Explorer navigators, follow these steps: - #. From the Splunk Observability Cloud home page, select :strong:`Infrastructure` on the left navigator. + #. From the Splunk Observability Cloud home page, select :strong:`Infrastructure` on the navigator. #. Select :strong:`Network Explorer`. .. image:: /_images/images-network-explorer/network-explorer-navigators.png diff --git a/infrastructure/network-explorer/network-explorer-setup.rst b/infrastructure/network-explorer/network-explorer-setup.rst index 165141a17..d32fbecb8 100644 --- a/infrastructure/network-explorer/network-explorer-setup.rst +++ b/infrastructure/network-explorer/network-explorer-setup.rst @@ -1,17 +1,16 @@ - - .. _network-explorer-setup: ******************************************************* -Set up Network Explorer +Set up Network Explorer in Kubernetes ******************************************************* .. meta:: - :description: Install and configure Network Explorer on Kubernetes systems using the OpenTelemetry eBPF Helm chart. + :description: Install and configure Network Explorer on Kubernetes systems using the OpenTelemetry Collector eBPF Helm chart. + +You can install and configure Network Explorer as part of the Splunk Distribution of OpenTelemetry Collector Helm chart. You also need the OpenTelemetry Collector eBPF Helm chart. -.. note:: The following topic only applies to Kubernetes systems. If you want to set up Network Explorer on other systems, see :ref:`network-explorer-setup-non-k8s`. +To install Network Explorer in systems not using Kubernetes, see :ref:`network-explorer-setup-non-k8s`. -You can install and configure Network Explorer as part of the Splunk Distribution of OpenTelemetry Collector Helm chart. Prerequisites ============================== @@ -38,7 +37,7 @@ To use Network Explorer with Kubernetes, you must meet the following requirement * - Kubernetes version - Network Explorer requires Kubernetes 1.24 or higher. For more information, see :new-page:`Releases ` in the Kubernetes documentation. - + * - Helm version - Network Explorer requires Helm version 3.9 or higher. 
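Before installing, you can confirm both version prerequisites from any workstation with cluster access. This is a convenience check only; output formats vary by client version.

.. code-block:: bash

   # Helm must report v3.9.0 or higher
   helm version --short

   # The Kubernetes server version must be 1.24 or higher
   kubectl version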
@@ -63,7 +62,7 @@ To use Network Explorer with OpenShift, you must meet the following requirements
 Network Explorer components
 =================================

-Network Explorer consists of the following components:
+The Helm chart for Network Explorer consists of the following components:

 .. list-table::
    :header-rows: 1

@@ -76,36 +75,41 @@ Network Explorer consists of the following components:
    * - The reducer
      - The reducer takes the data points collected by the collectors and reduces them to actual metric time series (MTS). The reducer also connects to the Splunk Distribution of OpenTelemetry Collector on the OTLP gRPC port.
-     - Yes. Install and configure at least one instance of the reducer.
+     - Yes. Install and configure at least 1 instance of the reducer.
      - Yes
    * - The kernel collector
-     - The Extended Berkeley Packet Filter (eBPF) agent responsible for gathering data points from the kernel.
+     - The Extended Berkeley Packet Filter (eBPF) agent responsible for gathering data points from the kernel.
      - Yes. Install and configure the kernel collector on each of your hosts.
      - Yes
    * - The Kubernetes collector
      - The Kubernetes collector further enriches collected data points with additional metadata.
      - No. If you want to get additional metadata, install and configure at least one instance of the Kubernetes collector on each Kubernetes cluster.
-     - Yes. If you want to disable the Kubernetes collector, set ``k8sCollector.enabled`` to ``false``.
+     - Yes. If you want to turn off the Kubernetes collector, set ``k8sCollector.enabled`` to ``false``.
    * - The cloud collector
      - The cloud collector further enriches collected data points with additional metadata.
-     - No. If your Kubernetes is hosted by, or installed within, AWS, and you want to get additional metadata, install and configure at least one instance of the cloud collector.
-     - No. If you want to enable the cloud collector, set ``cloudCollector.enabled`` to ``true``.
+     - No. If your Kubernetes is hosted by, or installed within, AWS, and you want to get additional metadata, install and configure at least 1 instance of the cloud collector.
+     - No. If you want to turn on the cloud collector, set ``cloudCollector.enabled`` to ``true``.

 .. _install-network-explorer:

 Install Network Explorer
-=======================================================================================
+==================================================
+
+To collect and send network data to Network Explorer, you need to install 2 separate Helm charts: the Splunk OpenTelemetry Collector Helm chart and the OpenTelemetry Collector eBPF Helm chart.

-For the Splunk Distribution of OpenTelemetry Collector to work with Network Explorer, you must install it in data forwarding (gateway) mode, and follow these steps:
+Install the Collector Helm chart
+----------------------------------------------------------
+
+For the Splunk Distribution of OpenTelemetry Collector to work with Network Explorer, you must install it in data forwarding (Gateway) mode with the following settings:

 - Turn on OTLP gRPC reception by configuring an OTLP gRPC metric receiver on the Gateway.
 - Turn on SignalFx export by configuring a SignalFx exporter on the Gateway with a valid realm and access token.
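If you were wiring these two settings by hand instead of relying on the Helm chart defaults, the receiver and exporter would look roughly like the following Collector configuration sketch. The endpoint, realm, and token values are placeholders, not prescribed values.

.. code-block:: yaml

   receivers:
     otlp:
       protocols:
         grpc:
           endpoint: 0.0.0.0:4317      # OTLP gRPC reception on the Gateway

   exporters:
     signalfx:
       realm: us0                             # placeholder realm
       access_token: ${SPLUNK_ACCESS_TOKEN}   # ingest-scope token

   service:
     pipelines:
       metrics:
         receivers: [otlp]
         exporters: [signalfx]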
-The OTLP gRPC metric receiver and SignalFx exporter are already configured in the Helm chart for the Splunk Distribution of OpenTelemetry Collector, so if you use the Helm chart method to install the Splunk Distribution of OpenTelemetry Collector, you don't need to configure these requirements separately. +The OTLP gRPC metric receiver and SignalFx exporter are already configured in the Helm chart for the Splunk Distribution of OpenTelemetry Collector, so if you use the Helm chart method to install the Splunk Distribution of OpenTelemetry Collector, you don't need to configure these requirements separately. See :ref:`otel-install-k8s` for detailed instructions. The following table shows required parameters for this installation: @@ -116,16 +120,16 @@ The following table shows required parameters for this installation: * - :strong:`Parameter` - :strong:`Description` + * - ``gateway`` + - Activates data forwarding (Gateway) mode, which is required by Network Explorer. * - ``namespace`` - - The Kubernetes namespace to install into. This value must match the value for the namespace of the Network Explorer. + - Kubernetes namespace to install into. This value must match the value for the namespace of the Network Explorer. * - ``splunkObservability.realm`` - Splunk realm to send telemetry data to. For example, ``us0``. * - ``splunkObservability.accessToken`` - - The access token for your organization. An access token with ingest scope is sufficient. For more information, see :ref:`admin-org-tokens`. + - Access token for your organization. An access token with ingest scope is sufficient. For more information, see :ref:`admin-org-tokens`. * - ``clusterName`` - An arbitrary value that identifies your Kubernetes cluster. - * - ``networkExplorer.enabled`` - - Set this to ``true`` to activate Network Explorer. * - ``agent.enabled`` - * If you are adding Network Explorer to an existing Splunk Distribution of OpenTelemetry Collector configuration, leave ``agent.enabled`` as is. * If you are installing a new instance of the Splunk Distribution of OpenTelemetry Collector and only want to collect telemetry from Network Explorer, set this to ``false`` to turn off installing the Splunk Distribution of OpenTelemetry Collector in host monitoring (agent) mode on each Kubernetes node. @@ -137,27 +141,55 @@ The following table shows required parameters for this installation: * - ``gateway.replicaCount`` - Set this to ``1`` since Network Explorer doesn't support communication to multiple gateway replicas. +.. note:: Starting from version 0.88 of the Helm chart, the ``networkExplorer`` setting of the Splunk OpenTelemetry Collector Helm chart is deprecated. For instructions on how to migrate from the ``networkExplorer`` setting to the eBPF Helm chart, see :ref:`ebpf-chart-migrate`. -Example: Install Network Explorer for Kubernetes +.. _ebpf-chart-setup: + +Install the eBPF Helm chart ---------------------------------------------------------- -In this example, the reducer, the kernel collector, and the Kubernetes collector are configured. The cloud collector isn't turned on. +After you've deployed the Splunk Distribution of OpenTelemetry Collector using the Helm chart, add the OpenTelemetry eBPF Helm chart by running these commands: -Follow these steps to install Network Explorer using the Helm chart method: +.. code-block:: shell -#. Run the following command to deploy the Helm chart. 
+      helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
+      helm repo update open-telemetry
+      helm install my-opentelemetry-ebpf -f ./otel-ebpf-values.yaml open-telemetry/opentelemetry-ebpf

-   .. code-block:: bash
+Make sure that the otel-ebpf-values.yaml file has the ``endpoint.address`` option set to the Splunk OpenTelemetry Collector gateway service name. You can get the service name by running the following command:

-      helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
+.. code-block:: shell

-#. Run the following command to update the Helm chart.
+   kubectl get svc | grep splunk-otel-collector-gateway

-   .. code-block:: bash
+The OpenTelemetry Collector eBPF Helm chart requires kernel headers on each Kubernetes node to run the kernel collector. The kernel collector installs the headers automatically unless your nodes don't have access to the internet.

-      helm repo update
+If you need to install the required packages manually, run the following command:

-#. Run the following command to install the Splunk Distribution of OpenTelemetry Collector. Replace the parameters with their appropriate values.
+.. tabs::
+
+   .. code-tab:: bash Debian
+
+      sudo apt-get install --yes linux-headers-$(uname -r)
+
+   .. code-tab:: bash RedHat Linux/Amazon Linux
+
+      sudo yum install -y kernel-devel-$(uname -r)
+
+
+Example: Install Network Explorer for Kubernetes
+----------------------------------------------------------
+
+In this example, the reducer, the kernel collector, and the Kubernetes collector are configured together with the OpenTelemetry Collector eBPF Helm chart. The cloud collector isn't turned on.
+
+#. Deploy and update the Splunk OpenTelemetry Collector Helm chart:
+
+   .. code-block:: shell
+
+      helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
+      helm repo update
+
+#. Install the Splunk Distribution of OpenTelemetry Collector. Replace the parameters with their appropriate values:

 .. tabs::

@@ -167,12 +199,11 @@ Follow these steps to install Network Explorer using the Helm chart method:

    .. code-tab:: bash Collect Network Explorer telemetry only

      helm --namespace= install splunk-otel-collector \
      --set="splunkObservability.realm=" \
      --set="splunkObservability.accessToken=" \
      --set="clusterName=" \
-     --set="networkExplorer.enabled=true" \
      --set="agent.enabled=false" \
      --set="clusterReceiver.enabled=false" \
      --set="gateway.replicaCount=1" \
      splunk-otel-collector-chart/splunk-otel-collector
-
+
    .. code-tab:: bash Collect Network Explorer and other telemetry

      helm --namespace= install splunk-otel-collector \
      --set="splunkObservability.realm=" \
      --set="clusterName=" \
      --set="splunkObservability.logsEnabled=true" \
      --set="splunkObservability.infrastructureMonitoringEventsEnabled=true" \
-     --set="networkExplorer.enabled=true" \
-     --set="networkExplorer.podSecurityPolicy.enabled=false" \
      --set="agent.enabled=true" \
      --set="clusterReceiver.enabled=true" \
      --set="gateway.replicaCount=1" \
      --set="gateway.resources.limits.cpu=2" \
      --set="gateway.resources.limits.memory=1Gi" \
      splunk-otel-collector-chart/splunk-otel-collector

+#. Deploy and update the OpenTelemetry Collector eBPF Helm chart:

-#. (Optional) The Network Explorer kernel collector requires kernel headers to run the kernel in each Kubernetes node. The kernel collector installs the headers automatically unless your nodes don't have access to the internet.
-
-   If you need to install the required packages manually, run the following command:
-
-   .. tabs::
+   .. code-block:: shell

-      .. code-tab:: bash Debian
+      helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
+      helm repo update

-         sudo apt-get install --yes linux-headers-$(uname -r)
+#. Install the OpenTelemetry Collector eBPF Helm chart. Replace the parameters with their appropriate values:

-      .. code-tab:: bash RedHat Linux/Amazon Linux
+   .. code-block:: shell

-         sudo yum install -y kernel-devel-$(uname -r)
+      helm --namespace= install my-opentelemetry-ebpf \
+      --set="endpoint.address=" \
+      open-telemetry/opentelemetry-ebpf

 For additional Splunk Distribution of OpenTelemetry Collector configuration, see :ref:`otel-install-k8s`.

@@ -212,9 +241,9 @@ For additional Splunk Distribution of OpenTelemetry Collector configuration, see

 Example: Install Network Explorer for OpenShift
 ----------------------------------------------------------

-Follow these steps to install Network Explorer for OpenShift:
+In this example, each node of an OpenShift cluster runs on Red Hat Enterprise Linux CoreOS, which has SELinux activated by default. To install the Network Explorer kernel collector, you have to configure Super-Privileged Container (SPC) for SELinux. Follow these steps to install Network Explorer:

-#. Each node of an OpenShift cluster runs on Red Hat Enterprise Linux CoreOS, which has SELinux enabled by default. To install the Network Explorer kernel collector, you have to configure Super-Privileged Container (SPC) for SELinux. Run the following script to modify the SELinux SPC policy to allow additional access to ``spc_t`` domain processes.
+#. Run the following script to modify the SELinux SPC policy to allow additional access to ``spc_t`` domain processes:

    .. code-block:: bash

@@ -255,30 +284,43 @@ Follow these steps to install Network Explorer for OpenShift:
      --set="splunkObservability.accessToken=" \
      --set="distribution=openshift" \
      --set="clusterName=" \
-     --set="networkExplorer.enabled=true" \
      --set="agent.enabled=true" \
      --set="clusterReceiver.enabled=true" \
      --set="gateway.replicaCount=1" \
-     --set="networkExplorer.podSecurityPolicy.enabled=false" \
-     --set="networkExplorer.rbac.create=true" \
-     --set="networkExplorer.k8sCollector.serviceAccount.create=true" \
-     --set="networkExplorer.kernelCollector.serviceAccount.create=true" \
-     --set="networkExplorer.kernelCollector.image.tag=4.18.0-372.51.1.el8_6.x86_64" \
-     --set="networkExplorer.kernelCollector.image.repository=quay.io/splunko11ytest/network-explorer-debug" \
-     --set="networkExplorer.kernelCollector.image.name=kernel-collector-openshift" \
      splunk-otel-collector-chart/splunk-otel-collector

-#. The Network Explorer kernel collector pods need privileged access to function. Run the following command to configure privileged access for the kernel collector pods.
+#. Deploy and update the OpenTelemetry Collector eBPF Helm chart:

-   .. code-block:: bash
+   .. code-block:: shell
+
+      helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
+      helm repo update
+
+#. Install the OpenTelemetry Collector eBPF Helm chart. Replace the parameters with their appropriate values:
+
+   ..
code-block:: shell + + helm --namespace= install my-opentelemetry-ebpf \ + --set="endpoint.address=" \ + --set="podSecurityPolicy.enabled=false" \ + --set="rbac.create=true" \ + --set="k8sCollector.serviceAccount.create=true" \ + --set="kernelCollector.serviceAccount.create=true" \ + --set="kernelCollector.image.tag=4.18.0-372.51.1.el8_6.x86_64" \ + --set="kernelCollector.image.name=kernel-collector-openshift" \ + open-telemetry/opentelemetry-ebpf - oc adm policy add-scc-to-user privileged -z my-splunk-otel-collector-kernel-collector -n +#. The kernel collector pods need privileged access to function. Run the following command to configure privileged access for the kernel collector pods. + + .. code-block:: bash + + oc adm policy add-scc-to-user privileged -z my-opentelemetry-ebpf -n #. Run the following command to update the default security context constraints (SCC) for your OpenShift cluster, so that images are not forced to run as a pre-allocated User Identifier, without granting everyone access to the privileged SCC. - .. code-block:: bash + .. code-block:: bash - oc adm policy add-scc-to-user anyuid -z my-splunk-otel-collector-k8s-collector -n + oc adm policy add-scc-to-user anyuid -z my-opentelemetry-ebpf -n .. _resize-otel-installation: @@ -286,33 +328,33 @@ Change the resource footprint of Splunk Distribution of OpenTelemetry Collector ================================================================================== Each Kubernetes node has a Splunk Distribution of OpenTelemetry Collector, so you might want to adjust your resources depending on the number of Kubernetes nodes you have. - - You can update the :new-page:`Splunk Distribution of OpenTelemetry Collector values file `, or specify different values during installation. - - These are the default resource configurations. - .. code-block:: yaml +You can update the :new-page:`Splunk Distribution of OpenTelemetry Collector values file `, or specify different values during installation. - resources: - limits: - cpu: 4 - memory: 8Gi +These are the default resource configurations: - Use the following approximations to determine your resource needs. +.. code-block:: yaml - .. list-table:: - :header-rows: 1 - :widths: 50 50 + resources: + limits: + cpu: 4 + memory: 8Gi - * - :strong:`Approximation` - - :strong:`Resource needs` - - * - Up to 500 nodes/5,000 data points per second - - CPU: 500m, memory: 1 Gi - * - Up to 1,000 nodes/10,000 data points per second - - CPU: 1, memory: 2 Gi - * - Up to 2,000 nodes/20,000 data points per second - - CPU: 2, memory: 4 Gi +Use the following approximations to determine your resource needs. + +.. list-table:: + :header-rows: 1 + :widths: 50 50 + + * - :strong:`Approximation` + - :strong:`Resource needs` + + * - Up to 500 nodes/5,000 data points per second + - CPU: 500m, memory: 1 Gi + * - Up to 1,000 nodes/10,000 data points per second + - CPU: 1, memory: 2 Gi + * - Up to 2,000 nodes/20,000 data points per second + - CPU: 2, memory: 4 Gi Example @@ -331,7 +373,7 @@ In the following example, CPU is set to :strong:`500m`, and memory is set to :st .. 
code-tab:: bash Pass arguments during installation - helm --namespace= install my-splunk-otel-collector --set="splunkObservability.realm=,splunkObservability.accessToken=,clusterName=,agent.enabled=false,clusterReceiver.enabled=false,networkExplorer.enabled=true,gateway.replicaCount=1,gateway.resources.limits.cpu=500m,gateway.resources.limits.memory=1Gi" splunk-otel-collector-chart/splunk-otel-collector + helm --namespace= install my-splunk-otel-collector --set="splunkObservability.realm=,splunkObservability.accessToken=,clusterName=,agent.enabled=false,clusterReceiver.enabled=false,gateway.replicaCount=1,gateway.resources.limits.cpu=500m,gateway.resources.limits.memory=1Gi" splunk-otel-collector-chart/splunk-otel-collector .. _resize-installation: @@ -347,33 +389,26 @@ The reducer is a single pod per Kubernetes cluster. If your cluster contains a l The reducer processes telemetry in multiple stages, with each stage partitioned into one or more shards, where each shard is a separate thread. Increasing the number of shards in each stage expands the capacity of the reducer. -Change the following parameters in the :new-page:`Splunk Distribution of OpenTelemetry Collector values file ` to increase or decrease the number of shards per reducer stage. You can set between 1-32 shards. +Change the following parameters in the :new-page:`OpenTelemetry Collector eBPF values file ` to increase or decrease the number of shards per reducer stage. You can set between 1-32 shards. The default configuration is 1 shard per reducer stage. - .. code-block:: yaml + .. code-block:: yaml - networkExplorer: - reducer: - ingestShards: 1 - matchingShards: 1 - aggregationShards: 1 + reducer: + ingestShards: 1 + matchingShards: 1 + aggregationShards: 1 -Example -+++++++++ - -The following example uses 4 shards per reducer stage. +The following example uses 4 shards per reducer stage: - .. code-block:: yaml + .. code-block:: yaml - networkExplorer: - reducer: - ingestShards: 4 - matchingShards: 4 - aggregationShards: 4 + reducer: + ingestShards: 4 + matchingShards: 4 + aggregationShards: 4 -Estimate reducer CPU and memory usage -+++++++++++++++++++++++++++++++++++++++ To estimate the CPU and memory usage the reducer might require from a node, you can use these simple formulas: :: @@ -381,7 +416,7 @@ To estimate the CPU and memory usage the reducer might require from a node, you Memory in Mebibytes (Mi) = 4 * Number of nodes in your cluster + 60 Fractional CPU in milliCPU (m) = Number of nodes in your cluster + 30 -This gives you an approximate expected usage. Multiply the final numbers by a factor of 1.5 or 2 to give headroom for growth and spikes in usage. +This gives you an approximate expected usage. Multiply the final numbers by a factor of 1.5 or 2 to give room for growth and spikes in usage. .. _customize-network-explorer-metrics: @@ -389,195 +424,181 @@ This gives you an approximate expected usage. Multiply the final numbers by a fa Customize network telemetry generated by Network Explorer ------------------------------------------------------------- -If you want to collect fewer or more network telemetry metrics, you can update the :new-page:`Splunk Distribution of OpenTelemetry Collector values file `. +If you want to collect fewer or more network telemetry metrics, you can update the :new-page:`OpenTelemetry Collector eBPF values file `. The following sections show you how to turn off or turn on different metrics. 
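After changing any of the reducer settings described in the following sections, apply the edited values file by upgrading the eBPF chart release. The release and file names follow the earlier examples and are assumptions, not fixed names.

.. code-block:: bash

   # Re-apply the edited values file to the running eBPF chart release
   helm upgrade my-opentelemetry-ebpf -f ./otel-ebpf-values.yaml open-telemetry/opentelemetry-ebpf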
Turn on all metrics, including metrics turned off by default ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code-block:: yaml + .. code-block:: yaml + + reducer: + disableMetrics: + - none - networkExplorer: - reducer: - disableMetrics: - - none - Turn off entire metric categories ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code-block:: yaml - - networkExplorer: - reducer: - disableMetrics: - - tcp.all - - udp.all - - dns.all - - http.all + .. code-block:: yaml + + reducer: + disableMetrics: + - tcp.all + - udp.all + - dns.all + - http.all Turn off an individual TCP metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - - .. code-block:: yaml - - networkExplorer: - reducer: - disableMetrics: - - tcp.bytes - - tcp.rtt.num_measurements - - tcp.active - - tcp.rtt.average - - tcp.packets - - tcp.retrans - - tcp.syn_timeouts - - tcp.new_sockets - - tcp.resets + .. code-block:: yaml + + reducer: + disableMetrics: + - tcp.bytes + - tcp.rtt.num_measurements + - tcp.active + - tcp.rtt.average + - tcp.packets + - tcp.retrans + - tcp.syn_timeouts + - tcp.new_sockets + - tcp.resets Turn off an individual UDP metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - - .. code-block:: yaml - networkExplorer: - reducer: - disableMetrics: - - udp.bytes - - udp.packets - - udp.active - - udp.drops + .. code-block:: yaml + + reducer: + disableMetrics: + - udp.bytes + - udp.packets + - udp.active + - udp.drops Turn off an individual DNS metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - - .. code-block:: yaml - networkExplorer: - reducer: - disableMetrics: - - dns.client.duration.average - - dns.server.duration.average - - dns.active_sockets - - dns.responses - - dns.timeouts + .. code-block:: yaml + + reducer: + disableMetrics: + - dns.client.duration.average + - dns.server.duration.average + - dns.active_sockets + - dns.responses + - dns.timeouts Turn off an individual HTTP metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - - .. code-block:: yaml - networkExplorer: - reducer: - disableMetrics: - - http.client.duration.average - - http.server.duration.average - - http.active_sockets - - http.status_code + .. code-block:: yaml + + reducer: + disableMetrics: + - http.client.duration.average + - http.server.duration.average + - http.active_sockets + - http.status_code Turn off an internal metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code-block:: yaml + .. code-block:: yaml - networkExplorer: - reducer: - disableMetrics: - - ebpf_net.bpf_log - - ebpf_net.otlp_grpc.bytes_sent - - ebpf_net.otlp_grpc.failed_requests - - ebpf_net.otlp_grpc.metrics_sent - - ebpf_net.otlp_grpc.requests_sent - - ebpf_net.otlp_grpc.successful_requests - - ebpf_net.otlp_grpc.unknown_response_tags + reducer: + disableMetrics: + - ebpf_net.bpf_log + - ebpf_net.otlp_grpc.bytes_sent + - ebpf_net.otlp_grpc.failed_requests + - ebpf_net.otlp_grpc.metrics_sent + - ebpf_net.otlp_grpc.requests_sent + - ebpf_net.otlp_grpc.successful_requests + - ebpf_net.otlp_grpc.unknown_response_tags -.. note:: This list represents the set of internal metrics which are enabled by default. +.. note:: This list represents the set of internal metrics which are activated by default. Turn on entire metric categories ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code-block:: yaml + .. 
code-block:: yaml - networkExplorer: - reducer: - enableMetrics: - - tcp.all - - udp.all - - dns.all - - http.all - - ebpf_net.all + reducer: + enableMetrics: + - tcp.all + - udp.all + - dns.all + - http.all + - ebpf_net.all Turn on an individual TCP metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - .. code-block:: yaml - - networkExplorer: - reducer: - enableMetrics: - - tcp.bytes - - tcp.rtt.num_measurements - - tcp.active - - tcp.rtt.average - - tcp.packets - - tcp.retrans - - tcp.syn_timeouts - - tcp.new_sockets - - tcp.resets + .. code-block:: yaml + + reducer: + enableMetrics: + - tcp.bytes + - tcp.rtt.num_measurements + - tcp.active + - tcp.rtt.average + - tcp.packets + - tcp.retrans + - tcp.syn_timeouts + - tcp.new_sockets + - tcp.resets Turn on an individual UDP metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - - .. code-block:: yaml - networkExplorer: - reducer: - enableMetrics: - - udp.bytes - - udp.packets - - udp.active - - udp.drops + .. code-block:: yaml + + reducer: + enableMetrics: + - udp.bytes + - udp.packets + - udp.active + - udp.drops Turn on an individual DNS metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - - .. code-block:: yaml - networkExplorer: - reducer: - enableMetrics: - - dns.client.duration.average - - dns.server.duration.average - - dns.active_sockets - - dns.responses - - dns.timeouts + .. code-block:: yaml + + reducer: + enableMetrics: + - dns.client.duration.average + - dns.server.duration.average + - dns.active_sockets + - dns.responses + - dns.timeouts Turn on an individual HTTP metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - - .. code-block:: yaml - networkExplorer: - reducer: - enableMetrics: - - http.client.duration.average - - http.server.duration.average - - http.active_sockets - - http.status_code + .. code-block:: yaml + + reducer: + enableMetrics: + - http.client.duration.average + - http.server.duration.average + - http.active_sockets + - http.status_code Turn on an internal metric ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - - .. code-block:: yaml - networkExplorer: - reducer: - enableMetrics: - - ebpf_net.span_utilization_fraction - - ebpf_net.pipeline_metric_bytes_discarded - - ebpf_net.codetiming_min_ns - - ebpf_net.entrypoint_info - - ebpf_net.otlp_grpc.requests_sent + .. code-block:: yaml + + reducer: + enableMetrics: + - ebpf_net.span_utilization_fraction + - ebpf_net.pipeline_metric_bytes_discarded + - ebpf_net.codetiming_min_ns + - ebpf_net.entrypoint_info + - ebpf_net.otlp_grpc.requests_sent .. note:: This list does not include the entire set of internal metrics. @@ -586,30 +607,28 @@ Example In the following example, all HTTP metrics along with certain individual TCP and UDP metrics are deactivated. All DNS metrics are collected. - .. code-block:: yaml + .. code-block:: yaml - networkExplorer: - reducer: - disableMetrics: - - http.all - - tcp.syn_timeouts - - tcp.new_sockets - - tcp.resets - - udp.bytes - - udp.packets + reducer: + disableMetrics: + - http.all + - tcp.syn_timeouts + - tcp.new_sockets + - tcp.resets + - udp.bytes + - udp.packets In the following example, all HTTP metrics along with certain individual internal metrics are turned on. - .. note:: The ``disableMetrics`` flag is evaluated before the ``enableMetrics`` flag. +.. note:: The ``disableMetrics`` flag is evaluated before the ``enableMetrics`` flag. - .. code-block:: yaml +.. 
code-block:: yaml - networkExplorer: - reducer: - enableMetrics: - - http.all - - ebpf_net.codetiming_min_ns - - ebpf_net.entrypoint_info + reducer: + enableMetrics: + - http.all + - ebpf_net.codetiming_min_ns + - ebpf_net.entrypoint_info Next steps ==================================== @@ -618,16 +637,16 @@ Once you set up Network Explorer, you can start monitoring network telemetry met - Built-in Network Explorer navigators. To see the Network Explorer navigators, follow these steps: - #. From the Splunk Observability Cloud home page, select :strong:`Infrastructure` on the left navigator. - #. Select :strong:`Network Explorer`. + #. From the Splunk Observability Cloud home page, select :strong:`Infrastructure` on the left navigator. + #. Select :strong:`Network Explorer`. .. image:: /_images/images-network-explorer/network-explorer-navigators.png - :alt: Network Explorer navigator tiles on the Infrastructure landing page. - :width: 80% + :alt: Network Explorer navigator tiles on the Infrastructure landing page. + :width: 80% - #. Select the card for the Network Explorer navigator you want to view. + #. Select the card for the Network Explorer navigator you want to view. - For more information, see :ref:`use-navigators-imm`. +For more information, see :ref:`use-navigators-imm`. - Service map. For more information, see :ref:`network-explorer-network-map`. - Alerts and detectors. For more information, see :ref:`get-started-detectoralert`. From a499da9fba5c6b7b6bff967a78d5815ab9edaa32 Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri-Benedetti Date: Thu, 23 Nov 2023 11:39:22 +0100 Subject: [PATCH 07/12] Migration docs --- .../network-explorer-setup.rst | 48 ++++++++++++++++++- 1 file changed, 47 insertions(+), 1 deletion(-) diff --git a/infrastructure/network-explorer/network-explorer-setup.rst b/infrastructure/network-explorer/network-explorer-setup.rst index d32fbecb8..38f4aa04d 100644 --- a/infrastructure/network-explorer/network-explorer-setup.rst +++ b/infrastructure/network-explorer/network-explorer-setup.rst @@ -630,10 +630,56 @@ In the following example, all HTTP metrics along with certain individual interna - ebpf_net.codetiming_min_ns - ebpf_net.entrypoint_info +.. _ebpf-chart-migrate: + +Migrate from networkExplorer to eBPF Helm chart +========================================================= + +Starting from version 0.88 of the Helm chart, the ``networkExplorer`` setting of the Splunk OpenTelemetry Collector Helm chart is deprecated. ``networkExplorer`` settings are fully compatible with the OpenTelemetry Collector eBPF Helm chart, which is supported by Network Explorer. + +To migrate to the OpenTelemetry Collector eBPF Helm chart, follow these steps: + +1. Make sure that the Splunk OpenTelemetry Collector Helm chart is installed in data forwarding (Gateway) mode: + + .. code-block:: yaml + + gateway: + enabled: true + +2. Disable the ``networkExplorer`` setting in the Splunk OpenTelemetry Collector Helm chart: + + .. code-block:: yaml + + networkExplorer: + enabled: false + +3. Retrieve the name of the Splunk OpenTelemetry Collector gateway service: + + .. code-block:: shell + + kubectl get svc | grep splunk-otel-collector-gateway + +4. Install the upstream OpenTelemetry Collector eBPF Helm chart pointing to the Splunk OpenTelemetry Collector gateway service: + + .. 
code-block:: shell
+
+   helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
+   helm repo update open-telemetry
+   helm install my-opentelemetry-ebpf -f ./otel-ebpf-values.yaml open-telemetry/opentelemetry-ebpf
+
+The otel-ebpf-values.yaml file must have the ``endpoint.address`` option set to the Splunk OpenTelemetry Collector gateway service name captured in the third step.
+
+.. code-block:: yaml
+
+   endpoint:
+     address:
+
+Additionally, if you had any custom settings in the ``networkExplorer`` section, you need to move them to the otel-ebpf-values.yaml file. See the :new-page:`OpenTelemetry Collector eBPF values file ` for more information.
+
 Next steps
 ====================================

-Once you set up Network Explorer, you can start monitoring network telemetry metrics coming into your Splunk Infrastructure Monitoring platform using one or more of the following options:
+Once you set up Network Explorer, you can start monitoring network telemetry metrics coming into your Splunk Infrastructure Monitoring platform using 1 or more of the following options:

 - Built-in Network Explorer navigators. To see the Network Explorer navigators, follow these steps:

From 4881f316939c2ec191dc36c64061ec7f9d5e3083 Mon Sep 17 00:00:00 2001
From: Fabrizio Ferri-Benedetti
Date: Thu, 23 Nov 2023 11:42:35 +0100
Subject: [PATCH 08/12] Remove a reference

---
 gdi/opentelemetry/deployment-modes.rst | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/gdi/opentelemetry/deployment-modes.rst b/gdi/opentelemetry/deployment-modes.rst
index 20fb17a95..010adbd3c 100644
--- a/gdi/opentelemetry/deployment-modes.rst
+++ b/gdi/opentelemetry/deployment-modes.rst
@@ -95,13 +95,12 @@ To change the deployment mode, modify ``SPLUNK_CONFIG`` for the path to the gate
 Kubernetes
 ----------------------------------

-The Collector for Kubernetes has different deployment options. You can configure them using the ``enabled`` field in their respective Helm value mappings. See :ref:`otel-kubernetes-config-advanced` for information on how to access your configuration yaml.
+The Collector for Kubernetes has different deployment options. You can configure them using the ``enabled`` field in their respective Helm value mappings. See :ref:`otel-kubernetes-config-advanced` for information on how to access your configuration YAML.

 The main deployment modes are:

 * Default, which includes the ``agent`` daemonset and the ``clusterReceiver`` deployment component.
 * All collector modes, which includes the ``agent`` daemonset, the ``clusterReceiver``, and the ``gateway`` components.
-* Network explorer deployment mode, which uses the ``networkExplorer.kernelCollector`` daemonset and ``networkExplorer.k8sCollector`` config. See more in :ref:`network-explorer-setup`.

 For more information on the components on each mode, see :ref:`helm-chart-components`.
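As a sketch of what those ``enabled`` fields look like in a values file, consider the following; the defaults shown are illustrative and depend on your chart version.

.. code-block:: yaml

   # Helm values sketch: component toggles for the deployment modes above
   agent:
     enabled: true          # daemonset on every node (default mode)
   clusterReceiver:
     enabled: true          # cluster-level deployment (default mode)
   gateway:
     enabled: false         # set to true for all collector modes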
From 185d394a79636d2595804303b3cc6eba4b0b0ad6 Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri-Benedetti Date: Thu, 23 Nov 2023 11:58:50 +0100 Subject: [PATCH 09/12] Indentation --- infrastructure/network-explorer/network-explorer-setup.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/infrastructure/network-explorer/network-explorer-setup.rst b/infrastructure/network-explorer/network-explorer-setup.rst index 38f4aa04d..d6da66351 100644 --- a/infrastructure/network-explorer/network-explorer-setup.rst +++ b/infrastructure/network-explorer/network-explorer-setup.rst @@ -644,14 +644,14 @@ To migrate to the OpenTelemetry Collector eBPF Helm chart, follow these steps: .. code-block:: yaml gateway: - enabled: true + enabled: true 2. Disable the ``networkExplorer`` setting in the Splunk OpenTelemetry Collector Helm chart: .. code-block:: yaml networkExplorer: - enabled: false + enabled: false 3. Retrieve the name of the Splunk OpenTelemetry Collector gateway service: From b49754e5b5a3f17b77c439010966953641e2e7a0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Piotr=20Kie=C5=82kowicz?= Date: Thu, 23 Nov 2023 13:36:52 +0100 Subject: [PATCH 10/12] OTel .NET metrics - fix formatting --- .../otel-dotnet/configuration/dotnet-metrics-attributes.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/gdi/get-data-in/application/otel-dotnet/configuration/dotnet-metrics-attributes.rst b/gdi/get-data-in/application/otel-dotnet/configuration/dotnet-metrics-attributes.rst index f6510c7c0..8aaa53be4 100644 --- a/gdi/get-data-in/application/otel-dotnet/configuration/dotnet-metrics-attributes.rst +++ b/gdi/get-data-in/application/otel-dotnet/configuration/dotnet-metrics-attributes.rst @@ -141,7 +141,7 @@ ASP.NET Core * - ``http.server.request.duration_{bucket|count|sum}`` - Cumulative counters (histogram) - Duration of HTTP server requests. Supported only on .NET8+. - * - kestrel.active_connections + * - ``kestrel.active_connections`` - Gauge - Number of connections that are currently active on the server. Supported only on .NET8+. * - ``kestrel.connection.duration_{bucket|count|sum}`` @@ -245,4 +245,4 @@ NServiceBus - Number of messages retrieved from the queue by the endpoint. * - ``nservicebus.messaging.failures`` - Cumulative counter - - Number of messages unsuccessfully processed by the endpoint. \ No newline at end of file + - Number of messages unsuccessfully processed by the endpoint. From 134463d64f3d5addd73572f3e471e4c8ad8b00d1 Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri-Benedetti Date: Tue, 28 Nov 2023 13:03:10 +0100 Subject: [PATCH 11/12] Remove manual NPM steps --- .../network-explorer-setup-non-k8s.rst | 155 ------------------ .../network-explorer-setup.rst | 2 - 2 files changed, 157 deletions(-) delete mode 100644 infrastructure/network-explorer/network-explorer-setup-non-k8s.rst diff --git a/infrastructure/network-explorer/network-explorer-setup-non-k8s.rst b/infrastructure/network-explorer/network-explorer-setup-non-k8s.rst deleted file mode 100644 index 3585f3a54..000000000 --- a/infrastructure/network-explorer/network-explorer-setup-non-k8s.rst +++ /dev/null @@ -1,155 +0,0 @@ -.. _network-explorer-setup-non-k8s: - -************************************************************************** -Set up Network Explorer on non-Kubernetes systems -************************************************************************** - -.. 
meta:: - :description: Install and configure Network Explorer on non-Kubernetes systems - -To use Network Explorer on non-Kubernetes systems, you must install the Extended Berkeley Packet Filter (eBPF) collector using the appropriate packaging system, RPM Package Manager (RPM) or dpkg. - -Install the eBPF collector -============================== - -Follow these steps to install and configure the eBPF collector on non-Kubernetes systems: - -#. Download the eBPF packages from the :new-page:`GitHub releases page `. -#. Run the following commands to install the reducer, the kernel collector, and the cloud collector components. - - .. tabs:: - - .. code-tab:: bash RPM - - rpm -i opentelemetry-ebpf-reducer-.rpm - rpm -i opentelemetry-ebpf-kernel-collector-.rpm - rpm -i opentelemetry-ebpf-cloud-collector-.rpm - - .. code-tab:: bash dpkg - - dpkg -i opentelemetry-ebpf-reducer-.deb - dpkg -i opentelemetry-ebpf-kernel-collector-.deb - dpkg -i opentelemetry-ebpf-cloud-collector-.deb - - .. note:: - * Install the reducer on only one node. - * Install the kernel collector on all nodes in a cluster. - * If a cluster is within Amazon Web Services, install the cloud collector on one node. - - -#. Edit the /etc/opentelemetry-ebpf/reducer.yaml file to configure the reducer. - - * If you use Splunk Distribution of OpenTelemetry Collector, edit the file according to the following table: - - .. list-table:: - :header-rows: 1 - :widths: 50 50 - - * - :strong:`Parameter` - - :strong:`Value` - * - ``enable_otlp_grpc_metrics`` - - ``true`` - * - ``otlp_grpc_metrics_address`` - - Host name or IP address of the OTLP gRPC receiver - * - ``disable_prometheus_metrics`` - - ``true`` - - * If you scrape with Prometheus, edit the file according to the following table: - - .. list-table:: - :header-rows: 1 - :widths: 50 50 - - * - :strong:`Parameter` - - :strong:`Value` - * - ``prom_bind`` - - IP address and port number on which Prometheus scrapes metrics - * - ``disable_prometheus_metrics`` - - ``false`` - - * If you use the cloud collector, set ``enable_aws_enrichment`` to ``true``. - -#. Run the following command to start or restart the reducer to apply the changes. - - .. tabs:: - - .. code-tab:: bash Start command - - systemctl start reducer - - .. code-tab:: bash Restart command - - systemctl restart reducer - -#. Edit the /etc/opentelemetry-ebpf/kernel-collector.yaml file to configure the kernel collector. Set the values according to the following table. - - .. list-table:: - :header-rows: 1 - :widths: 50 50 - - * - :strong:`Parameter` - - :strong:`Value` - * - Intake host - - IP address or host name where the reducer is running - * - Intake port - - Same value as ``telemetry_port`` in the reducer.yaml file - -#. Run the following command to start or restart the kernel collector to apply the changes. - - .. tabs:: - - .. code-tab:: bash Start command - - systemctl start kernel-collector - - .. code-tab:: bash Restart command - - systemctl restart kernel-collector - -#. Edit the /etc/opentelemetry-ebpf/cloud-collector.yaml file to configure the kernel collector. Set the values according to the following table. - - .. list-table:: - :header-rows: 1 - :widths: 50 50 - - * - :strong:`Parameter` - - :strong:`Value` - * - Intake host - - IP address or host name where the reducer is running - * - Intake port - - Same value as ``telemetry_port`` in the reducer.yaml file - -#. Run the following command to start or restart the cloud collector to apply the changes. - - .. tabs:: - - .. 
code-tab:: bash Start command - - systemctl start cloud-collector - - .. code-tab:: bash Restart command - - systemctl restart cloud-collector - -Next steps -==================================== - -Once you set up Network Explorer, you can start monitoring network telemetry metrics coming into your Splunk Infrastructure Monitoring platform using 1 or more of the following options: - -- Built-in Network Explorer navigators. To see the Network Explorer navigators, follow these steps: - - #. From the Splunk Observability Cloud home page, select :strong:`Infrastructure` on the navigator. - #. Select :strong:`Network Explorer`. - - .. image:: /_images/images-network-explorer/network-explorer-navigators.png - :alt: Network Explorer navigator tiles on the Infrastructure landing page. - :width: 80% - - #. Select the card for the Network Explorer navigator you want to view. - - For more information, see :ref:`use-navigators-imm`. - -- Service map. For more information, see :ref:`network-explorer-network-map`. -- Alerts and detectors. For more information, see :ref:`get-started-detectoralert`. - -For more information on metrics available to collect with Network Explorer, see :ref:`network-explorer-metrics`. diff --git a/infrastructure/network-explorer/network-explorer-setup.rst b/infrastructure/network-explorer/network-explorer-setup.rst index d6da66351..79d951ddc 100644 --- a/infrastructure/network-explorer/network-explorer-setup.rst +++ b/infrastructure/network-explorer/network-explorer-setup.rst @@ -9,8 +9,6 @@ Set up Network Explorer in Kubernetes You can install and configure Network Explorer as part of the Splunk Distribution of OpenTelemetry Collector Helm chart. You also need the OpenTelemetry Collector eBPF Helm chart. -To install Network Explorer in systems not using Kubernetes, see :ref:`network-explorer-setup-non-k8s`. - Prerequisites ============================== From f25da5b570813b570a32396abc353034e3e1aae2 Mon Sep 17 00:00:00 2001 From: Fabrizio Ferri-Benedetti Date: Tue, 28 Nov 2023 13:04:52 +0100 Subject: [PATCH 12/12] Remove labels --- infrastructure/network-explorer/network-explorer.rst | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/infrastructure/network-explorer/network-explorer.rst b/infrastructure/network-explorer/network-explorer.rst index 55c2b9929..e11bf9ee7 100644 --- a/infrastructure/network-explorer/network-explorer.rst +++ b/infrastructure/network-explorer/network-explorer.rst @@ -10,18 +10,16 @@ Network Explorer in Splunk Infrastructure Monitoring network-explorer-intro network-explorer-setup - network-explorer-setup-non-k8s network-explorer-network-map network-explorer-metrics network-explorer-scenarios/network-explorer-scenarios network-explorer-troubleshoot -Use the following links to navigate the documentation set for Network Explorer in Splunk Infrastructure Monitoring: +Use the following links to navigate the documentation set for Network Explorer in Splunk Infrastructure Monitoring: * :ref:`network-explorer-intro` * :ref:`network-explorer-setup` - * :ref:`network-explorer-setup-non-k8s` * :ref:`network-explorer-network-map` * :ref:`network-explorer-metrics` * :ref:`network-explorer-scenarios`