Description
We are running dotnet-monitor alongside a .NET application in Kubernetes.
We have configured dump collection so that dumps are first written to a temporary location (the /diag folder) and then egressed to an Azure Storage account.
However, dotnet-monitor does not delete the dump files from /diag after the upload completes.
Over time, these files accumulate and consume significant disk space, causing:
- Node disk pressure
- Pod evictions
- Potential application instability
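For reference, the relevant parts of our configuration look roughly like this. This is a minimal sketch of the setup described above, not our exact file: the egress provider name `monitorBlob`, the container name, and the account URI are placeholders.

```json
{
  "Storage": {
    "DumpTempFolder": "/diag"
  },
  "Egress": {
    "AzureBlobStorage": {
      "monitorBlob": {
        "accountUri": "https://<storage-account>.blob.core.windows.net",
        "containerName": "dumps"
      }
    }
  }
}
```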
Observed Disk Usage
Below are examples of disk usage inside different pods:
kubectl exec -it dotnet-app-XXXXX -- du -sh /diag
9.1G /diag
kubectl exec -it dotnet-app-XXXXX -- du -sh /diag
32G /diag
kubectl exec -it dotnet-app-XXXXX -- du -sh /diag
30G /diag
Impact
- Dump files continuously accumulate
- No automatic cleanup observed
- Node disk space gets exhausted
- Kubernetes evicts pods due to disk pressure
Expected Behavior
Dump files should either:
- Be automatically deleted from the temporary folder once egress completes, or
- Be subject to a configurable retention/size limit so the temporary folder cannot exhaust node disk space.
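As a stopgap we are considering a periodic cleanup job. The sketch below is a hypothetical workaround, not dotnet-monitor behavior: it prunes dump files older than a retention window with `find -mmin -delete`. It demos against a temp directory; in the pod the target would be /diag, and the `*.dmp` pattern and 60-minute window are assumptions. (`touch -d` is GNU coreutils.)

```shell
#!/bin/sh
set -e
# Demo directory standing in for /diag inside the pod.
DUMP_DIR="$(mktemp -d)"

# Simulate one stale dump (mtime 2 hours ago) and one fresh dump.
touch -d '2 hours ago' "$DUMP_DIR/dump_old.dmp"
touch "$DUMP_DIR/dump_recent.dmp"

# Delete dump files older than the 60-minute retention window.
find "$DUMP_DIR" -type f -name '*.dmp' -mmin +60 -delete

# Only the recent dump should remain.
ls "$DUMP_DIR"
```

In Kubernetes this could run as a CronJob or sidecar sharing the /diag volume, but native retention support in dotnet-monitor would make such workarounds unnecessary.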