Commit 43e2cb3

[chore] Refcache refresh, oldest link is now 2024-08-06 (#6167)

chalin and tiffany76 authored

Co-authored-by: Tiffany Hrabusa <[email protected]>
1 parent 748555c commit 43e2cb3

File tree

6 files changed: +728 −1205 lines changed

.github/workflows/check-links.yml (+3 −1)

@@ -104,7 +104,9 @@ jobs:
       - name: Fail when refcache contains entries with HTTP status 4XX
         run: |
           if grep -B 1 -e '"StatusCode": 4' static/refcache.json; then
-            echo "Run 'npx gulp prune' to remove 4xx entries from the refcache"
+            echo "Run 'npm run _refcache:prune' to remove 404 entries from refcache.json,"
+            echo "or run './scripts/double-check-refcache-400s.mjs' locally to address"
+            echo "other 400-status entries."
             exit 1
           fi
       - name: Does the refcache need updating?
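For context: refcache.json maps each checked URL to a record whose fields include StatusCode and LastSeen (those field names appear in the script diffs further down), and the grep above fails the job when any record carries a 4xx status. A minimal Node.js sketch of the same check, offered only as an illustration of what this workflow step does, not as the actual implementation:

// check-refcache-4xx.mjs — hedged sketch, not the actual workflow step.
// Reads the refcache and fails (exit 1) if any entry has a 4xx status,
// mirroring the grep in check-links.yml above.
import { readFile } from 'node:fs/promises';

const cache = JSON.parse(await readFile('static/refcache.json', 'utf8'));
const entries4xx = Object.entries(cache).filter(
  ([, { StatusCode }]) => StatusCode >= 400 && StatusCode < 500,
);

if (entries4xx.length > 0) {
  for (const [url, { StatusCode, LastSeen }] of entries4xx) {
    console.error(`${StatusCode}: ${url} (last seen ${LastSeen})`);
  }
  console.error(
    "Run 'npm run _refcache:prune' to remove 404 entries from refcache.json.",
  );
  process.exit(1); // fail the job, as `exit 1` does in the workflow
}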

content/en/docs/kubernetes/operator/troubleshooting/target-allocator.md (+7 −7)

@@ -5,7 +5,7 @@ cSpell:ignore: bleh targetallocator

 If you’ve enabled
 [Target Allocator](/docs/kubernetes/operator/target-allocator/) service
-discovery on the [OpenTelemetry Operator](/docs/kubernetes/operator), and the
+discovery on the [OpenTelemetry Operator](/docs/kubernetes/operator/), and the
 Target Allocator is failing to discover scrape targets, there are a few
 troubleshooting steps that you can take to help you understand what’s going on
 and restore normal operation.

@@ -21,9 +21,8 @@ Kubernetes cluster.

 After you’ve deployed all of your resources to Kubernetes, make sure that the
 Target Allocator is discovering scrape targets from your
-[`ServiceMonitor`](https://prometheus-operator.dev/docs/operator/design/#servicemonitor)(s)
-or
-[`PodMonitor`](https://prometheus-operator.dev/docs/user-guides/getting-started/#using-podmonitors)(s).
+[`ServiceMonitor`](https://prometheus-operator.dev/docs/getting-started/design/#servicemonitor)(s)
+or [PodMonitor]s.

 Suppose that you have this `ServiceMonitor` definition:

@@ -386,9 +385,7 @@ Allocator will fail to discover scrape targets from that `ServiceMonitor`.

 {{% alert title="Tip" %}}

-The same applies if you’re using a
-[PodMonitor](https://prometheus-operator.dev/docs/user-guides/getting-started/#using-podmonitors).
-In that case, you would use a
+The same applies if you’re using a [PodMonitor]. In that case, you would use a
 [`podMonitorSelector`](https://github.com/open-telemetry/opentelemetry-operator/blob/main/docs/api.md#opentelemetrycollectorspectargetallocatorprometheuscr)
 instead of a `serviceMonitorSelector`.

@@ -513,3 +510,6 @@ If you’re using `PodMonitor`, the same applies, except that it picks up
 Kubernetes pods that match on labels, namespaces, and named ports.

 {{% /alert %}}
+
+[PodMonitor]:
+  https://prometheus-operator.dev/docs/developer/getting-started/#using-podmonitors

data/ecosystem/vendors.yaml (+3 −3)

@@ -55,7 +55,7 @@
 - name: Causely
   distribution: false
   nativeOTLP: true
-  url: https://github.com/Causely/documentation
+  url: https://www.causely.ai/blog/using-opentelemetry-and-the-otel-collector-for-logs-metrics-and-traces

   oss: false
   commercial: true

@@ -68,7 +68,7 @@
 - name: Chronosphere
   distribution: false
   nativeOTLP: true
-  url: https://docs.chronosphere.io/ingest/otel/otel-ingest
+  url: https://docs.chronosphere.io/ingest/

   oss: false
   commercial: true

@@ -314,7 +314,7 @@
   commercial: true
 - name: Red Hat
   nativeOTLP: true
-  url: https://docs.openshift.com/container-platform/4.14/otel/otel-release-notes.html
+  url: https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/red_hat_build_of_opentelemetry/

   oss: true
   commercial: true

scripts/double-check-refcache-400s.mjs (+7 −2)

@@ -20,13 +20,18 @@ async function writeRefcache(cache) {
   console.log(`Updated ${CACHE_FILE} with fixed links.`);
 }

-async function retry404sAndUpdateCache() {
+// Retry HTTP status check for refcache URLs with non-200s and not 404
+async function retry400sAndUpdateCache() {
   const cache = await readRefcache();
   let updated = false;

   for (const [url, details] of Object.entries(cache)) {
     const { StatusCode, LastSeen } = details;
     if (isHttp2XX(StatusCode)) continue;
+    if (StatusCode === 404) {
+      console.log(`Skipping 404: ${url} (last seen ${LastSeen}).`);
+      continue;
+    }

     process.stdout.write(`Checking: ${url} (was ${StatusCode})... `);
     const status = await getUrlStatus(url);

@@ -49,4 +54,4 @@ async function retry404sAndUpdateCache() {
   }
 }

-await retry404sAndUpdateCache();
+await retry400sAndUpdateCache();
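The renamed retry400sAndUpdateCache() still relies on the readRefcache()/writeRefcache() helpers defined earlier in the same script, whose bodies fall outside this diff. A hedged sketch of what they plausibly look like; only the CACHE_FILE name, the helper names, and writeRefcache's log line are confirmed by the hunk above, the rest is an assumption:

// Hedged sketch of the cache I/O helpers; the implementations are guesses.
import { readFile, writeFile } from 'node:fs/promises';

const CACHE_FILE = 'static/refcache.json';

async function readRefcache() {
  // The cache is a URL -> { StatusCode, LastSeen } map, per the loop above.
  return JSON.parse(await readFile(CACHE_FILE, 'utf8'));
}

async function writeRefcache(cache) {
  await writeFile(CACHE_FILE, JSON.stringify(cache, null, 2) + '\n', 'utf8');
  console.log(`Updated ${CACHE_FILE} with fixed links.`);
}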

scripts/get-url-status.mjs (+1 −1)

@@ -67,7 +67,7 @@ export function isHttp2XX(status) {
 }

 export async function getUrlStatus(url) {
-  let status = 0; // await getUrlHeadless(url);
+  let status = await getUrlHeadless(url);
   if (!isHttp2XX(status)) {
     status = await getUrlInBrowser(url);
   }
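With the stub removed, getUrlStatus(url) now performs the cheap headless check first and falls back to a full in-browser load only when the headless result is not a 2xx. A hedged usage sketch, where the URL is merely an example and the relative import path is an assumption:

// try-url.mjs — illustrative only; both imports are exported by the diff above.
import { getUrlStatus, isHttp2XX } from './get-url-status.mjs';

const url = 'https://opentelemetry.io/docs/';
const status = await getUrlStatus(url); // headless first, then browser fallback
console.log(`${url} -> ${status} ${isHttp2XX(status) ? '(ok)' : '(needs attention)'}`);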
