Log Gathering & Application Metrics
This section presents what is logged in the WorkflowGen image and how, and provides pointers on configuring centralized logging with various services and orchestrators.
WorkflowGen doesn't come with metrics generation out-of-the-box. Nevertheless, you can configure existing software or services alongside the WorkflowGen container in order to gather useful metrics for monitoring performance and traffic.
When deploying WorkflowGen, many applications provide logs through different streams. This table lists the different log sources and their streams:

| Source | Streams |
| --- | --- |
| WorkflowGen web applications | Logs written in the App_Data folder |
| WorkflowGen Windows Services | Logs written through the Windows Events API |
| Internet Information Services (IIS) | Logs written in files in the container |
| iisnode | Logs written in multiple files in each Node.js application's folder inside the container |
As you can see, there are many log sources and not all are handled directly by the container. Some of them need to be managed manually.
The WorkflowGen web application logs are written in files in the App_Data folder, which is exposed through a volume. For more information about exposed volumes, see the File Management section.
In order to centralize these logs in a logging service or system, you need to develop your own script or application that periodically reads the logs in the App_Data volume and pushes them to the service. Concretely, this is done by developing a singleton container that has access to the WorkflowGen data volume. In Kubernetes, you would deploy this container inside a pod that has only one replica to avoid concurrency problems, as sketched below.
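For example, a minimal sketch of such a deployment could look like the following. The image name (log-shipper:latest), the PersistentVolumeClaim name (wfgen-appdata), and the mount path are assumptions for illustration only; adapt them to your own log-forwarding tool and to the volume actually used by your WorkflowGen container:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wfgen-log-shipper
spec:
  replicas: 1                          # single replica to avoid concurrent readers on the shared volume
  selector:
    matchLabels:
      app: wfgen-log-shipper
  template:
    metadata:
      labels:
        app: wfgen-log-shipper
    spec:
      containers:
        - name: log-shipper
          image: log-shipper:latest    # hypothetical image that reads log files and pushes them to your logging service
          volumeMounts:
            - name: wfgen-appdata
              mountPath: C:\wfgen\data # illustrative path; mount the same volume as the WorkflowGen container
              readOnly: true
      volumes:
        - name: wfgen-appdata
          persistentVolumeClaim:
            claimName: wfgen-appdata   # hypothetical claim backing the WorkflowGen App_Data volume
```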
The WorkflowGen Windows Services logs are written to the Windows Events API and are retrieved by the WorkflowGen container only when the start mode is win_services, dir_sync, or engine. In any other case, the IIS logs are redirected to the standard output and the Windows Services logs are discarded.
To get all the logs from WorkflowGen, you should use the architecture that separates each start mode into its own container, as sketched below. For more information, see the Recommended Architectures section.
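As an illustration, splitting the start modes across containers in Kubernetes could look like the following sketch. The WFGEN_START_MODE variable name and the image reference are assumptions for illustration; check the WorkflowGen image documentation for the actual variable and values used to select the start mode:

```yaml
# Sketch: one Deployment per start mode (only two shown); names and variables are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wfgen-webapps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wfgen-webapps
  template:
    metadata:
      labels:
        app: wfgen-webapps
    spec:
      containers:
        - name: workflowgen
          image: advantys/workflowgen:latest   # replace with your actual image reference
          # Default start mode: the web applications run and the IIS logs go to stdout.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wfgen-win-services
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wfgen-win-services
  template:
    metadata:
      labels:
        app: wfgen-win-services
    spec:
      containers:
        - name: workflowgen
          image: advantys/workflowgen:latest
          env:
            - name: WFGEN_START_MODE           # placeholder variable name; see the image documentation
              value: win_services              # the Windows Services logs are then retrieved by the container
```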
IIS logs are written in multiple files inside the WorkflowGen container. You don't have to manually manage them. The container reads the logs and directs them to the standard output of the container. The standard output of each container is picked up by the Docker logging system.
There are many Docker logging drivers to indicate to Docker where to put these logs. By default, they are written on the host in JSON format at the C:\ProgramData\Docker\logs path. The rest of this section provides more information about the Docker logging system and how to use it to centralize logs.
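For example, you can set the default logging driver and its options in the Docker daemon configuration file (C:\ProgramData\docker\config\daemon.json on Windows hosts). The following sketch keeps the built-in json-file driver and simply adds log rotation; other drivers such as fluentd, gelf, or awslogs can push container output to a centralized system instead:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}
```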
In Kubernetes, you can configure your own logging agent or use the default one provided. For more details about logging in Kubernetes, see its Logging Architecture page. Many cloud providers offer a centralized service to visualize logs from pods in their Kubernetes service. Refer to your cloud provider's documentation for details.
The iisnode logs are disabled by default and can be enabled by setting the following environment variables for the WorkflowGen container:
It's recommended to enable these logs only in a development environment, because they can be exposed through HTTP.
As shown, logging is done on a per-application and per-container basis. To get these logs out of the container, you have to use volumes and a custom application to centralize the logs, or build your own image based on the WorkflowGen image that redirects those logs to the container's standard output, as sketched below. For more information about making a custom WorkflowGen image, see the Custom Image section.
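For instance, a custom image could wrap the container's entry point with a file-to-stdout forwarding tool such as Microsoft's LogMonitor. The following Dockerfile is only a sketch: the base image tag, file paths, and the wrapped entry point are assumptions to adapt to your own setup:

```dockerfile
# Sketch only: base image tag, paths, and wrapped entry point are assumptions.
FROM advantys/workflowgen:latest

# Copy a log-forwarding tool and its configuration into the image,
# for example Microsoft's LogMonitor (github.com/microsoft/windows-container-tools).
COPY LogMonitor.exe LogMonitorConfig.json C:/LogMonitor/

# Wrap the image's original entry point so that the forwarder tails the configured
# log files (such as the iisnode log folders) and writes them to standard output.
ENTRYPOINT ["C:\\LogMonitor\\LogMonitor.exe", "powershell.exe", "C:\\entrypoint.ps1"]
```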
In order to centralize logs outside of a Docker host, you first need to choose a logging system or service. Once this is done, use your orchestrator's specific method to link the service to the logs.
Kubernetes provides similar functionality for managing container logs. See the following link for more information:
You can also use Azure Monitor with your Kubernetes cluster. See the following link for more information:
The Azure Kubernetes Service documentation recommends using Azure Monitor for containers. See its documentation page for more details:
Prometheus can help you reduce costs related to monitoring applications using a cloud service. It can be deployed as a container inside your cluster alongside WorkflowGen. You'll need to add a metrics utility to the WorkflowGen container, so you'll need to make a custom WorkflowGen image; for more information, see the Custom Image section. A scrape configuration like the sketch below then tells Prometheus where to collect those metrics.
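As an illustration, a Prometheus scrape configuration that discovers WorkflowGen pods through the Kubernetes API could look like the following sketch. The job name and the app=workflowgen pod label are assumptions; the exporter that actually exposes the metrics is the utility you add to your custom image:

```yaml
# prometheus.yml (excerpt): scrape metrics from pods labeled app=workflowgen (label is an assumption)
scrape_configs:
  - job_name: workflowgen
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        regex: workflowgen
        action: keep
```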
Here are some links to the Prometheus documentation and to Dockerfile examples to help you get started with application metrics using Prometheus:
Grafana can help you visualize metrics coming from the cluster. It also has a paid cloud service. Here are some links to get you started:
If you prefer to gather only IIS logs directly from the container, Azure Application Insights provides a PowerShell tool called Application Insights Agent that is installed inside the container. See Deploy Azure Monitor Application Insights Agent for on-premises servers for more information. You'll need to create your own image that incorporates this tool; for more information about custom WorkflowGen images, see the Custom Image section.
This approach should be considered only if your Kubernetes cluster is not managed by Azure Kubernetes Service and you want Application Insights to gather the logs. Otherwise, use the default logging agent or something like an ELK stack (Elasticsearch, Logstash, and Kibana) or an EFK stack (Elasticsearch, Fluentd, and Kibana).