Logging is a valuable tool to discover and debug issues happening when using Open Zaak.
Issues can be of a different nature, and a different logging approach suits each of them. For example, the following types of events are logged in some form or another:

- unexpected application errors, caused by programming mistakes (bugs) - these logs are technical in nature and should be accessible to Open Zaak developers
- application server logs - startup and access logs of the server running Open Zaak, useful to see if client requests actually reach the ‘backend’
- webserver logs - Open Zaak uses a complex set-up to serve user-uploaded files securely; things can go wrong here too
- container logs - the docker container itself may have problems that need to be investigated
This guide walks you through the various options on how to access and configure the logs.
Sentry focuses on tracking down errors in software, i.e. the Open Zaak application. We strongly recommend setting up this integration.
Open Zaak supports integration with Sentry error monitoring. Whenever a bug occurs in Open Zaak, the client receives an error response, while the technical details of the error are sent to the Sentry project, with context.

The Sentry integration strips sensitive context from the technical details: passwords and/or other credentials that happen to be in the request context are not sent to Sentry.
For documentation on how to set up a project in Sentry, please refer to the official documentation (make sure to follow the instructions for the platform Python > Django).
After setting up the project, you will receive a DSN, which is the URL to which exceptions will be sent (e.g. https://firstname.lastname@example.org/104).
The created Sentry project can be linked to Open Zaak by setting the environment variable SENTRY_DSN to this DSN.
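For illustration, in a container-based deployment the DSN can be provided through the environment. The value below re-uses the placeholder DSN from above and is not a real project:

```shell
# Placeholder DSN - replace with the DSN of your own Sentry project.
export SENTRY_DSN="https://firstname.lastname@example.org/104"
```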
Viewing nginx logs
Nginx is the webserver sitting between the client and the Open Zaak backend. It mostly proxies requests to the backend, but it takes care of serving Documenten API files after the authorization checks are performed.
Many Kubernetes providers provide a graphical interface to view logs, for example on Google Cloud:
1. Navigate to your Kubernetes cluster
2. Via Workloads, find the deployment nginx
3. Find and click the Container logs link

Or, via the CLI tool:
```shell
# for convenience, set up the k8s namespace
[user@host]$ kubectl config set-context --current --namespace=openzaak-test

# fetch the logs
[user@host]$ kubectl logs -l app.kubernetes.io/name=nginx

# or for a single pod:
[user@host]$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
cache-79455b996-62llx       1/1     Running   0          68d
nginx-8579d9dfbd-8dn5m      1/1     Running   0          7h3m
nginx-8579d9dfbd-h4tc4      1/1     Running   0          7h3m
openzaak-59df44f556-7znvg   1/1     Running   0          7h2m
openzaak-59df44f556-gb4lq   1/1     Running   0          7h3m
openzaak-59df44f556-nqtr2   1/1     Running   0          7h3m

[user@host]$ kubectl logs --since=24h nginx-8579d9dfbd-8dn5m
```
On a VMware appliance or single-server
On a single-server setup, nginx is not containerized and the log files can be found in:

- /var/log/nginx/error.log contains errors encountered by nginx
- /var/log/nginx/access.log is the access log of all the client requests
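When combing through the access log, it often helps to filter for failed requests. A minimal sketch, assuming nginx's default combined log format (where the HTTP status code is the 9th whitespace-separated field); the sample log lines and the /tmp path are made up for the example:

```shell
# Create a small sample access log (made-up entries) to demonstrate filtering.
cat > /tmp/access.log.sample <<'EOF'
10.0.0.1 - - [01/Jan/2024:12:00:00 +0000] "GET /zaken/api/v1/zaken HTTP/1.1" 200 512 "-" "curl/7.68.0"
10.0.0.2 - - [01/Jan/2024:12:00:01 +0000] "GET /documenten/api/v1/enkelvoudiginformatieobjecten HTTP/1.1" 403 98 "-" "curl/7.68.0"
EOF

# Keep only requests with an HTTP status of 400 or higher (field 9 in the
# combined log format is the status code).
awk '$9 >= 400' /tmp/access.log.sample
```

The same filter applied to /var/log/nginx/access.log quickly surfaces denied or failing requests.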
Application server, application and container logs
The application server, the application and the container itself write logs (together they make up the ‘backend’).
When the container starts up, it performs some checks before it proceeds with the application server startup.
Then, the application server starts up and writes some status information.
Every request that is received by the application server is logged as well - this is the access log.
Finally, the application itself detects potential problematic situations or writes other informational messages to the logging output.
All of these logs are logged to the container logs.
Viewing the container logs on Kubernetes
As with the nginx logs on Kubernetes, you can make use of your provider’s graphical interface if available.
Otherwise, you can use the CLI tool:
```shell
# for convenience, set up the k8s namespace
[user@host]$ kubectl config set-context --current --namespace=openzaak-test

# fetch the logs
[user@host]$ kubectl logs -l app.kubernetes.io/name=django

# or for a single pod:
[user@host]$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
cache-79455b996-62llx       1/1     Running   0          68d
nginx-8579d9dfbd-8dn5m      1/1     Running   0          7h3m
nginx-8579d9dfbd-h4tc4      1/1     Running   0          7h3m
openzaak-59df44f556-7znvg   1/1     Running   0          7h2m
openzaak-59df44f556-gb4lq   1/1     Running   0          7h3m
openzaak-59df44f556-nqtr2   1/1     Running   0          7h3m

[user@host]$ kubectl logs --since=24h openzaak-59df44f556-gb4lq
```
On a VMware appliance or single-server
Unfortunately, docker does not seem to be able to aggregate logs from different containers. This means that if you are running multiple replicas of Open Zaak (which is the default), you may have to dig around a bit before you find what you are looking for.
To view the logs of a particular replica:
```shell
# first replica
[root@server]# docker logs openzaak-0

# second replica
[root@server]# docker logs openzaak-1
```
Check the Docker documentation for more information about logs in Docker.
Customizing the log output

Logging to file instead
By default, we configure Open Zaak to log to stdout in containers by setting the environment variable LOG_STDOUT=1.
You may wish to log to files instead, by using persistent volumes. If you decide to do this, then:

- Make sure to mount the volume on /app/log - this is where log files are written to.
- When multiple replicas are used, the volume must be writable by all of them (on Kubernetes, for example, a ReadWriteMany volume).
- Set the environment variable LOG_STDOUT=0 to fall back to file-based logging.
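As an illustration, a hypothetical docker-compose fragment combining these steps (the service and volume names are made up; the image tag is an example):

```yaml
services:
  openzaak:
    image: openzaak/open-zaak:latest
    environment:
      - LOG_STDOUT=0  # fall back to file-based logging
    volumes:
      - log-data:/app/log  # log files are written to this path

volumes:
  log-data:
```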
By default, log files are rotated: once a log file reaches 10MB, a new file is created, and once 10 files exist, the oldest is deleted.