# Logcollection
Filebeat and Logstash can be deployed to collect cluster logs in OpenSearch, which allows for aggregation and easy inspection of those logs. The logcollection functionality can be deployed to both debug and non-debug clusters.
## Deployment in Debug Clusters
In debug clusters, the logcollection functionality should be deployed automatically through the debug daemon `debugd`, which runs before the bootstrapper and can therefore, unlike in non-debug clusters, also collect logs of the bootstrapper.
## Deployment in Non-Debug Clusters
In non-debug clusters, logcollection functionality needs to be explicitly deployed as a Kubernetes Deployment through Helm. To do that, a few steps need to be followed:
- Template the deployment configuration through the `loco` CLI (a full example invocation is shown after this list):

  ```shell
  bazel run //hack/logcollector template -- \
    --dir $(realpath .) \
    --username <OPENSEARCH_USERNAME> \
    --password <OPENSEARCH_PW> \
    --info deployment-type={k8s, debugd} ...
  ```

  This will place the templated configuration in the current directory. OpenSearch user credentials can be created by any admin in OpenSearch. Logging in with your company CSP account should grant you sufficient permissions to create a user and assign them the required `all_access` role.

  One can add additional key-value pairs to the configuration by appending `--info key=value` to the command. These key-value pairs will be attached to the log entries and can be used to filter them in OpenSearch. For example, it might be helpful to add a `test=<xyz>` tag to be able to filter out logs from a specific test run.
- Deploy Logstash:

  ```shell
  cd logstash
  make add
  make install
  cd ..
  ```

  This will add the required Logstash Helm charts and deploy them to your cluster.
- Deploy Filebeat:

  ```shell
  cd filebeat
  make add
  make install
  cd ..
  ```

  This will add the required Filebeat Helm charts and deploy them to your cluster.
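The `make add` and `make install` targets wrap Helm. As a rough, non-authoritative sketch of the assumed equivalent commands (the chart repository, release names, and values files shown here are assumptions, not taken from the actual Makefiles):

```shell
# Sketch only: the real make targets may pin different chart sources, names, and values.
helm repo add elastic https://helm.elastic.co          # assumed `make add`: register the chart repository
helm install logstash elastic/logstash -f values.yml   # assumed `make install`: deploy with the templated values
```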
To remove Logstash or Filebeat, `cd` into the corresponding directory and run `make remove`.
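For illustration, a full templating invocation with several `--info` pairs might look like the following; the username, password, and tag values are made-up placeholders, not real credentials:

```shell
# Hypothetical invocation of the templating step; all values are placeholders.
bazel run //hack/logcollector template -- \
  --dir $(realpath .) \
  --username cluster-logs \
  --password 'REPLACE_ME' \
  --info deployment-type=k8s \
  --info test=e2e-upgrade-run-42
```

The extra `test` pair is the kind of marker mentioned above that later makes it easy to isolate the logs of a single test run in OpenSearch.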
## Inspecting Logs in OpenSearch
To search through logs in OpenSearch, head to the **Discover** page in the OpenSearch dashboard and configure the timeframe selector in the top right accordingly. Click `Refresh`. You can now see all logs recorded in the specified timeframe. To get a less cluttered view, select the fields you want to inspect in the left sidebar.
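If you attached `--info` key-value pairs during templating, they can also be used to narrow down the results. For example, assuming the entries were tagged with `test=e2e-upgrade-run-42`, a search bar query such as `test:e2e-upgrade-run-42` (or `deployment-type:debugd` to separate `debugd` from `k8s` deployments) should restrict the view to those entries. The exact field names depend on how the pipeline maps the `--info` pairs into the log documents, so treat these queries as a sketch rather than exact syntax.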