docs: remove broken links and publish removal of cloud logging

Thomas Tendyck 2024-02-22 21:58:34 +01:00 committed by Thomas Tendyck
parent 2a61861a1c
commit 31baba2d4b
5 changed files with 9 additions and 52 deletions


@@ -33,11 +33,7 @@ The payload is an actual message emitted from your system along with a metadata
### System logs
Constellation uses cloud logging for events occurring during the early stages of a node's boot process.
These logs include [Bootstrapper](./microservices.md#bootstrapper) events and [state disk UUIDs](../architecture/images.md#state-disk).
You can access the cloud logging [directly via the cloud provider endpoints](../workflows/troubleshooting.md#cloud-logging).
More detailed system-level logs are accessible via `/var/log` and [journald](https://www.freedesktop.org/software/systemd/man/systemd-journald.service.html) on the nodes directly.
Detailed system-level logs are accessible via `/var/log` and [journald](https://www.freedesktop.org/software/systemd/man/systemd-journald.service.html) on the nodes directly.
They can be collected from there, for example, via [Filebeat and Logstash](https://www.elastic.co/guide/en/beats/filebeat/current/logstash-output.html), which are tools of the [Elastic Stack](https://www.elastic.co/de/elastic-stack/).
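For a first look at these logs directly on a node, here is a minimal sketch, assuming shell access to the node; the `kubelet` unit name is only an illustrative example:

```bash
# Show everything journald collected since the current boot
journalctl --boot

# Show messages of a single systemd unit (the unit name is an example)
journalctl -u kubelet --no-pager

# Plain log files live under /var/log
ls /var/log
```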
In case of an error during the initialization, the CLI automatically collects the [Bootstrapper](./microservices.md#bootstrapper) logs and returns these as a file for [troubleshooting](../workflows/troubleshooting.md). Here is an example of such an event:


@@ -13,7 +13,7 @@ The first step to recovery is identifying when a cluster becomes unhealthy.
Usually, this can be first observed when the Kubernetes API server becomes unresponsive.
You can check the health status of the nodes via the cloud service provider (CSP).
Constellation provides logging information on the boot process and status via [cloud logging](troubleshooting.md#cloud-logging).
Constellation provides logging information on the boot process and status via serial console output.
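As an illustration, the node status can be queried with the respective CSP's CLI; the name filter, resource group, and output flags below are placeholders rather than part of the original instructions:

```bash
# GCP: list instances and their status (the name filter is a placeholder)
gcloud compute instances list --filter="name~'<cluster-name>'"

# Azure: show VMs including their power state
az vm list --resource-group <resource-group> --show-details --output table

# AWS: report instance status, including instances that aren't running
aws ec2 describe-instance-status --include-all-instances
```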
In the following, you'll find detailed descriptions for identifying clusters stuck in recovery for each CSP.
<tabs groupId="csp">


@@ -33,11 +33,7 @@ The payload is an actual message emitted from your system along with a metadata
### System logs
Constellation uses cloud logging for events occurring during the early stages of a node's boot process.
These logs include [Bootstrapper](./microservices.md#bootstrapper) events and [state disk UUIDs](../architecture/images.md#state-disk).
You can access the cloud logging [directly via the cloud provider endpoints](../workflows/troubleshooting.md#cloud-logging).
More detailed system-level logs are accessible via `/var/log` and [journald](https://www.freedesktop.org/software/systemd/man/systemd-journald.service.html) on the nodes directly.
Detailed system-level logs are accessible via `/var/log` and [journald](https://www.freedesktop.org/software/systemd/man/systemd-journald.service.html) on the nodes directly.
They can be collected from there, for example, via [Filebeat and Logstash](https://www.elastic.co/guide/en/beats/filebeat/current/logstash-output.html), which are tools of the [Elastic Stack](https://www.elastic.co/de/elastic-stack/).
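A minimal sketch of such a collection setup, assuming Filebeat's journald input and a reachable Logstash endpoint; the host name and file paths are assumptions:

```bash
# Write a minimal Filebeat configuration that forwards journald entries to Logstash
cat <<'EOF' > /etc/filebeat/filebeat.yml
filebeat.inputs:
  - type: journald          # read directly from the systemd journal
    id: node-journal
output.logstash:
  hosts: ["logstash.example.com:5044"]   # placeholder Logstash endpoint
EOF

# Run Filebeat in the foreground with that configuration
filebeat -e -c /etc/filebeat/filebeat.yml
```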
In case of an error during the initialization, the CLI automatically collects the [Bootstrapper](./microservices.md#bootstrapper) logs and returns these as a file for [troubleshooting](../workflows/troubleshooting.md). Here is an example of such an event:


@@ -13,7 +13,7 @@ The first step to recovery is identifying when a cluster becomes unhealthy.
Usually, this can be first observed when the Kubernetes API server becomes unresponsive.
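A quick way to test this is to query the API server with a short timeout; the timeout value below is arbitrary and only meant as a sketch:

```bash
# If these commands hang or time out, the API server is likely unresponsive
kubectl cluster-info --request-timeout=5s
kubectl get nodes --request-timeout=5s
```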
You can check the health status of the nodes via the cloud service provider (CSP).
Constellation provides logging information on the boot process and status via [cloud logging](troubleshooting.md#cloud-logging).
Constellation provides logging information on the boot process and status via serial console output.
In the following, you'll find detailed descriptions for identifying clusters stuck in recovery for each CSP.
<tabs groupId="csp">


@@ -95,49 +95,14 @@ check if the encountered [issue is known](https://github.com/edgelesssys/constel
## Diagnosing issues
### Cloud logging
### Logs
To provide information during early stages of a node's boot process, Constellation logs messages to the log systems of the cloud providers. Since these offerings **aren't** confidential, only generic information without any sensitive values is stored. This provides administrators with a high-level understanding of the current state of a node.
To get started on diagnosing issues with Constellation, it's often helpful to collect logs from nodes, pods, or other resources in the cluster. Most logs are available through Kubernetes' standard
[logging interfaces](https://kubernetes.io/docs/concepts/cluster-administration/logging/).
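For example, a short sketch of pulling logs through these interfaces; the pod name is a placeholder, and the `kube-system` namespace is used here because Constellation's microservices typically run there:

```bash
# List pods in the kube-system namespace
kubectl get pods -n kube-system

# Fetch the last lines of a pod's logs and follow new output
kubectl logs -n kube-system <pod-name> --tail=100 -f

# Logs of the previous container instance, useful after a restart
kubectl logs -n kube-system <pod-name> --previous
```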
You can view this information in the following places:
To debug issues occurring at boot time of the nodes, you can use the serial console interface of the CSP while the machine boots to get a read-only view of the boot logs.
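As a sketch, the serial console output can usually be read with the respective CSP's CLI; instance names, zones, and resource groups are placeholders, and on Azure boot diagnostics must be enabled for the VM:

```bash
# GCP
gcloud compute instances get-serial-port-output <instance-name> --zone <zone>

# Azure (requires boot diagnostics to be enabled on the VM)
az vm boot-diagnostics get-boot-log --resource-group <resource-group> --name <vm-name>

# AWS
aws ec2 get-console-output --instance-id <instance-id> --latest --output text
```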
<tabs groupId="csp">
<tabItem value="azure" label="Azure">
1. In your Azure subscription, find the Constellation resource group.
2. Inside the resource group, find the Application Insights resource called `constellation-insights-*`.
3. On the left-hand side, go to `Logs`, which is located in the section `Monitoring`.
    - Close the Queries page if it pops up.
4. In the query text field, type in `traces` and click `Run`.
To **find the disk UUIDs**, use the following query: `traces | where message contains "Disk UUID"`
</tabItem>
<tabItem value="gcp" label="GCP">
1. Select the project that hosts Constellation.
2. Go to the `Compute Engine` service.
3. On the right-hand side of a VM entry, select `More Actions` (a stacked ellipsis).
    - Select `View logs`.
To **find the disk UUIDs**, use the following query: `resource.type="gce_instance" text_payload=~"Disk UUID:.*\n" logName=~".*/constellation-boot-log"`
:::info
Constellation uses the default bucket to store logs. Its [default retention period is 30 days](https://cloud.google.com/logging/quotas#logs_retention_periods).
:::
</tabItem>
<tabItem value="aws" label="AWS">
1. Open [AWS CloudWatch](https://console.aws.amazon.com/cloudwatch/home).
2. Select [Log Groups](https://console.aws.amazon.com/cloudwatch/home#logsV2:log-groups).
3. Select the log group that matches the name of your cluster.
4. Select the log stream for control or worker type nodes.
</tabItem>
</tabs>
Apart from that, Constellation also offers further [observability integrations](../architecture/observability.md).
### Node shell access