Commit 8797534

Merge pull request #18910 from brnhensley/patch-1

chore: code formatting

akristen authored Oct 9, 2024
2 parents 9d98f87 + eebe35c
Showing 19 changed files with 184 additions and 161 deletions.
@@ -131,7 +131,6 @@ newrelic_remote_write:
  extra_write_relabel_configs:
    # Enable the extra_write_relabel_configs below for backwards compatibility with legacy POMI labels.
    # This is helpful when migrating from POMI to ensure that Prometheus metrics will contain both labels (e.g. cluster_name and clusterName).
    - source_labels: [namespace]
      action: replace
      target_label: namespaceName
@@ -27,8 +27,8 @@ To configure the agent using Helm, you should set up your `values.yaml` in one of
>
```yaml
global:
  licenseKey: YOUR_NEW_RELIC_LICENSE_KEY
  cluster: K8S_CLUSTER_NAME

newrelic-prometheus-agent:
  enabled: true
@@ -44,8 +44,8 @@ To configure the agent using Helm, you should set up your `values.yaml` in one of
This option is only recommended if you're an advanced user.

```yaml
licenseKey: YOUR_NEW_RELIC_LICENSE_KEY
cluster: K8S_CLUSTER_NAME
config:
  # YOUR CONFIGURATION GOES HERE. An example:
@@ -75,8 +75,8 @@ app_values: ["redis", "traefik", "calico", "nginx", "coredns", "kube-dns", "etcd
Moreover, a new version of the integrations filters may cause a target that was already scraped by one job to be scraped a second time.
To receive a notification in the event of duplicated data (and prevent duplicate scraping altogether), you can create an alert based on the following query:

```sql
FROM Metric SELECT uniqueCount(job) FACET instance, cluster_name LIMIT 10 SINCE 2 minutes ago
```

If any value is different from 1, you have two or more jobs scraping the same instance in the same cluster.
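If the alert fires, a follow-up query along these lines can reveal which jobs overlap on a given instance. This is a minimal sketch: `YOUR_INSTANCE` and `YOUR_CLUSTER_NAME` are placeholders for the values surfaced by the alert.

```sql
// Hypothetical follow-up: list the job names scraping one affected instance.
// YOUR_INSTANCE and YOUR_CLUSTER_NAME are placeholders, not real values.
FROM Metric SELECT uniques(job)
WHERE instance = 'YOUR_INSTANCE' AND cluster_name = 'YOUR_CLUSTER_NAME'
SINCE 30 minutes ago
```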
@@ -106,19 +106,19 @@ The following example only scrapes `Pods` and `Endpoints` with the `newrelic.io/
```yaml
kubernetes:
  jobs:
    - job_name_prefix: example
      integrations_filter:
        enabled: false
      target_discovery:
        pod: true
        endpoints: true
        filter:
          annotations:
            # <string>: <regex>
            newrelic.io/scrape: "true"
          label:
            # <string>: <regex>
            k8s.io/app: "(postgres|mysql)"
```
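For illustration, a `Pod` that the example job above would discover might look like this sketch; the name, image, and port are hypothetical, and only the annotation and label matter to the filter:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres-example           # hypothetical name
  annotations:
    newrelic.io/scrape: "true"     # matches the annotations filter above
  labels:
    k8s.io/app: postgres           # matches the "(postgres|mysql)" label regex
spec:
  containers:
    - name: postgres
      image: postgres:16           # hypothetical image
      ports:
        - containerPort: 9187      # hypothetical metrics port
```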

<Callout variant="tip">
@@ -181,21 +181,21 @@ common:
  scrape_interval: 30s
kubernetes:
  jobs:
    # this job will use the default scrape_interval defined in common.
    - job_name_prefix: default-targets-with-30s-interval
      target_discovery:
        pod: true
        filter:
          annotations:
            newrelic.io/scrape: "true"
    - job_name_prefix: slow-targets-with-60s-interval
      scrape_interval: 60s
      target_discovery:
        pod: true
        filter:
          annotations:
            newrelic.io/scrape_slow: "true"
```
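To spot-check that a slow target really is scraped on the 60-second cadence, a query along these lines can help; it's a rough sketch, and `YOUR_SLOW_POD_NAME` is a placeholder:

```sql
// Hypothetical spot check: a 60s target should report roughly one batch of
// data points per minute, while a 30s target reports about two.
FROM Metric SELECT count(*) WHERE pod = 'YOUR_SLOW_POD_NAME' TIMESERIES 1 minute SINCE 30 minutes ago
```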

## Metric and label transformations [#metric-label-transformations]
@@ -210,21 +210,21 @@ Here's an example of how to use it in different parts of the YAML configuration

```yaml
static_targets:
  - name: self-metrics
    urls:
      - "http://static-service:8181"
    extra_metric_relabel_config:
      # Drop metrics with prefix 'go_' for this target.
      - source_labels: [__name__]
        regex: "go_.+"
        action: drop
newrelic_remote_write:
  extra_write_relabel_configs:
    # Drop all metrics with the specified name before they're sent to New Relic.
    - source_labels: [__name__]
      regex: "metric_name"
      action: drop
```

### YAML file snippet samples [#config-samples]
@@ -292,10 +292,10 @@ Add one of these examples in the YAML configuration file from the [metric and la
>
```yaml
- source_labels: [__name__]
  regex: 'prefix_.+'
  target_label: new_label
  action: replace
  replacement: newLabelValue
```
</Collapser>

@@ -307,7 +307,7 @@ Add one of these examples in the YAML configuration file from the [metric and la

```yaml
- regex: 'label_name'
  action: labeldrop
```
</Collapser>
</CollapserGroup>
@@ -328,36 +328,37 @@ Here are some examples to deal with targets that need access authorization:
```yaml
kubernetes:
  jobs:
    - job_name_prefix: skip-verify-on-https-targets
      target_discovery:
        pod: true
        filter:
          annotations:
            newrelic.io/scrape: "true"
    - job_name_prefix: bearer-token
      target_discovery:
        pod: true
        filter:
          label:
            k8s.io/app: my-app-with-token
      authorization:
        type: Bearer
        credentials_file: "/etc/my-app/token"
static_targets:
  jobs:
    - job_name: mtls-target
      scheme: https
      targets:
        - "my-mtls-target:8181"
      tls_config:
        ca_file: "/etc/my-app/client-ca.crt"
        cert_file: "/etc/my-app/client.crt"
        key_file: "/etc/my-app/client.key"
    - job_name: basic-auth-target
      targets:
        - "my-basic-auth-static:8181"
      basic_auth:
        password_file: "/etc/my-app/pass.htpasswd"
```
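After applying a configuration like this, the `up` metric (every scrape emits it, with `1` on success) offers a quick per-job health check. This sketch assumes the resulting `job` label starts with the configured `job_name`:

```sql
// Hypothetical check: 1 means the last scrape of the mTLS target succeeded.
// Assumes the job label is derived from job_name ("mtls-target").
FROM Metric SELECT latest(up) WHERE job LIKE 'mtls-target%' TIMESERIES
```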
@@ -22,10 +22,10 @@ global:
Once this has been updated in the values file, run the following helm upgrade command:

```bash
helm upgrade RELEASE_NAME newrelic-prometheus-configurator/newrelic-prometheus-agent \
  --namespace NEWRELIC_NAMESPACE \
  -f values-newrelic.yaml \
  [--version fixed-chart-version]
```
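If you plan to pin `--version`, you can first list the chart versions available; this assumes the repository was added under the name `newrelic-prometheus-configurator`:

```bash
# List available chart versions before pinning --version.
# Assumes the repo alias "newrelic-prometheus-configurator" from the install docs.
helm search repo newrelic-prometheus-configurator/newrelic-prometheus-agent --versions
```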

## Not seeing metrics for a target [#target-with-no-metrics]
@@ -44,8 +44,8 @@ Some of the dashboards provided by the [Prometheus integrations](/docs/infrastru

Every target scrape generates the `up` metric along with all the target's metrics. If the scrape is successful, the `up` metric has `1` as its value. If it's not successful, its value is `0`.

```sql
FROM Metric SELECT latest(up) WHERE cluster_name = 'YOUR_CLUSTER_NAME' AND pod = 'TARGET_POD_NAME' TIMESERIES
```

If this metric doesn't exist for the target, it may have been dropped.
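A companion query can list every pod in the cluster that reports `up` at all, which helps tell a dropped target apart from a mislabeled one; `YOUR_CLUSTER_NAME` is a placeholder:

```sql
// Hypothetical companion query: which pods and jobs report the up metric.
// YOUR_CLUSTER_NAME is a placeholder.
FROM Metric SELECT latest(up) WHERE cluster_name = 'YOUR_CLUSTER_NAME' FACET pod, job LIMIT MAX
```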
@@ -83,7 +83,7 @@ Use the following NRQL queries to understand the metrics being ingested in New Relic:

* Use this command to verify that the Argo CD Prometheus endpoint is emitting metrics on any K8s node configured with Argo CD:

```sh
curl <Argo CD-Pod-IP>:8082/metrics
```
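To find the pod IP used above, something like the following works, assuming Argo CD runs in the `argocd` namespace:

```sh
# Look up pod IPs; the IP column feeds the curl command above.
# The argocd namespace is an assumption; adjust to your install.
kubectl get pods -n argocd -o wide
```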

@@ -79,15 +79,15 @@ Use the following NRQL queries to understand the metrics being ingested in New Relic:
* Estimate data ingestion (daily ingest, in bytes):

```sql
FROM Metric SELECT bytecountestimate() WHERE metricName LIKE 'felix_%'
SINCE 1 day ago
```

## Troubleshooting

* Use this command to verify that the Calico Prometheus endpoint is emitting metrics on any K8s node configured with Calico CNI:

```sh
curl <Calico-Pod-IP>:9091/metrics
```

@@ -51,7 +51,7 @@ Follow these steps to enable the integration.
3. Use the following query to confirm metrics are being ingested as expected:

```sql
SELECT * FROM Metric WHERE metricName = 'rocksdb_num_sstables'
```

4. Install the [CockroachDB quickstart](https://newrelic.com/instant-observability/?search=cockroachdb) to access built-in <InlinePopover type="dashboards"/> and [alerts](/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/introduction-alerts/).
@@ -69,17 +69,17 @@ Follow these steps to enable the integration.

```yaml
- source_labels: [__name__]
  separator: ;
  regex: timeseries_write_(.*)
  target_label: newrelic_metric_type
  replacement: counter
  action: replace
- source_labels: [__name__]
  separator: ;
  regex: sql_byte(.*)
  target_label: newrelic_metric_type
  replacement: counter
  action: replace
```
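Because the `newrelic_metric_type` label travels to New Relic with each metric, a query along these lines can confirm the override took effect; it assumes the label is queryable as a metric attribute:

```sql
// Hypothetical check: metrics that now carry the counter type hint.
// Assumes newrelic_metric_type is exposed as an attribute on Metric.
FROM Metric SELECT count(*) WHERE newrelic_metric_type = 'counter' FACET metricName SINCE 30 minutes ago
```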

## Find and use the data
@@ -100,19 +100,25 @@ Use the following NRQL queries to understand the CockroachDB metrics being ingested
* List unique metric names:

```sql
FROM Metric SELECT uniques(metricName)
WHERE (app_kubernetes_io_name LIKE '%cockroach%' OR app_newrelic_com_name LIKE '%cockroach%' OR k8s_app LIKE '%cockroach%')
LIMIT MAX
```

* Count number of metric updates:

```sql
FROM Metric SELECT datapointcount()
WHERE (app_kubernetes_io_name LIKE '%cockroach%' OR app_newrelic_com_name LIKE '%cockroach%' OR k8s_app LIKE '%cockroach%')
FACET metricName
```

* Estimate data ingestion (daily ingest, in bytes):

```sql
FROM Metric SELECT bytecountestimate()
WHERE (app_kubernetes_io_name LIKE '%cockroach%' OR app_newrelic_com_name LIKE '%cockroach%' OR k8s_app LIKE '%cockroach%')
SINCE 1 day ago
```
</Collapser>

@@ -127,7 +133,7 @@ Use the following NRQL queries to understand the CockroachDB metrics being ingested
* List unique metric names:

```sql
FROM Metric SELECT uniques(metricName) WHERE job = 'cockroachdb' LIMIT MAX
```

* Count number of metric updates:
@@ -40,7 +40,7 @@ Follow these steps to enable the integration.
3. Use the following query to confirm metrics are being ingested as expected:

```sql
FROM Metric SELECT count(*) WHERE metricName LIKE 'coredns_%' FACET metricName LIMIT MAX
```

4. Install the [CoreDNS quickstart](https://newrelic.com/instant-observability/CoreDNS) to access built-in dashboards and [alerts](/docs/alerts-applied-intelligence/new-relic-alerts/learn-alerts/introduction-alerts/).
@@ -64,13 +64,13 @@ Use the following NRQL queries to understand the metrics being ingested in New Relic:
* List unique metric names:

```sql
FROM Metric SELECT uniques(metricName) WHERE metricName LIKE 'coredns_%' LIMIT MAX
```

* Count number of metric updates:

```sql
FROM Metric SELECT datapointcount() WHERE metricName LIKE 'coredns_%' LIMIT MAX
```

* Estimate data ingestion (daily ingest, in bytes):