Log configuration can be found in .github/workflows/kind.yaml.
logr is a logging API for Go. It provides a simple logging interface behind which an actual logger implementation does the work. Zap is a logging library that implements the logr interface.
SDK-generated operators use the logr interface for logging. By default, the Operator SDK uses a zap-based logger that is ready for production use, with the verbosity set to the info level.
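For reference, the sketch below shows roughly how such a zap-based logr.Logger is typically wired in an SDK-generated main.go using the controller-runtime zap helper. It is a minimal, illustrative example; the log messages are made up and not taken from the flotta-operator.

```go
package main

import (
	"flag"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// Production-ready defaults: structured output, info verbosity.
	// BindFlags exposes command-line options such as --zap-log-level and --zap-devel.
	opts := zap.Options{Development: false}
	opts.BindFlags(flag.CommandLine)
	flag.Parse()

	// Install the zap-based logger as the logr.Logger used by controller-runtime.
	ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

	setupLog := ctrl.Log.WithName("setup")
	setupLog.Info("starting manager")
}
```

Raising the verbosity (for example with --zap-log-level=debug) is what makes the V(1) lines described below visible.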
logr defines a logger's verbosity levels numerically. To write log lines that are more verbose, logr.Logger has a V() method: the higher the V-level of a log line, the less critical it is considered. Level V(0) is the default, and logger.V(0).Info() has the same meaning as logger.Info().
Levels in logr correspond to custom debug levels in Zap. Any given level in logr is represented by its inverse in zap (zapLevel = -1*logrLevel). Thus, in Zap, higher levels are more important.
For example: logr V(2) is equivalent to log level -2 in Zap, while logr V(1) is equivalent to debug level -1 in Zap.
To summarize:
| Zap logging priority | Zap enum | logr |
|---|---|---|
| -1 | debug | .V(1).Info(...) |
| 0 | info | .V(0).Info(...) or .Info(...) |
| 1 | warn | N.A. |
| 2 | error | .Error(...) |
| 3 | dpanic | N.A. |
| 4 | panic | N.A. |
| 5 | fatal | N.A. |
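To make the mapping concrete, the sketch below logs at the three levels that logr can express, again using the controller-runtime zap helper; the messages and key/value pairs are invented for illustration.

```go
package main

import (
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// Development mode lowers the threshold so V(1) (zap debug, level -1) is emitted.
	logger := zap.New(zap.UseDevMode(true))

	// zap level 0 (info).
	logger.Info("device registered", "name", "edgedevice1")

	// zap level -1 (debug); dropped when verbosity stays at the info default.
	logger.V(1).Info("full device status", "heartbeat", "ok")

	// zap level 2 (error); logr has no warn/dpanic/panic/fatal equivalents.
	logger.Error(nil, "failed to update device status")
}
```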
Make sure oc or kubectl is configured properly to communicate with your cluster.
# Build the flotta operator
$ make build
# Make sure your environment is configured to use your k8s provider's Docker daemon
# For example, for minikube run: eval $(minikube -p minikube docker-env)
# Then run the build of the container
$ IMG=flotta-operator:latest HTTP_IMG=flotta-edge-api:latest make docker-build
# Deploy the operator on k8s
$ IMG=flotta-operator:latest HTTP_IMG=flotta-edge-api:latest make deploy
# Wait until the operator is ready
$ kubectl wait --timeout=120s --for=condition=Ready pods -l app=flotta-controller-manager -n flotta
$ kubectl port-forward deploy/flotta-edge-api -n flotta --address 0.0.0.0 8043:8043 &
$ make integration-test
kubectl wait timeouts:
If the wait times out, inspect the deployment logs by running:
kubectl logs deploy/flotta-controller-manager -n flotta
Timeouts while waiting for the edge device in integration tests:
Debug the edge device container by checking the agent (yggdrasild) service logs inside it:
docker exec -it edgedevice1 journalctl -u yggdrasild.service
Error: “No such image: quay.io/project-flotta/edgedevice:latest”
This error happens when KinD does not have the image stored in the control plane. To solve it, run the following commands to pull the image and load it into the cluster:
docker pull quay.io/project-flotta/edgedevice:latest
kind load docker-image quay.io/project-flotta/edgedevice:latest
How can I see the running containers in the device?
To list the podman-managed pods running on a device, run this command against the target device:
docker exec -it edgedevice1 machinectl shell -q flotta@.host /usr/bin/podman pod ps
or
machinectl shell -q flotta@.host /usr/bin/podman pod ps
if you’re running it inside the container.
How can I see the logs produced by the flotta user in the device?
When running podman in rootless mode, the logs from the agent and from podman are captured under the flotta user on the device. Run the following command to see them via journalctl:
machinectl shell -q flotta@.host /usr/bin/journalctl --user