Configure the Azure Diagnostic Extension for Storing Linux Log Files
by Rackspace Technology Staff
Introduction
A colleague of mine was trying to figure out a cheap and simple way to store log files from their application and have the ability to search through them. The first thing that came to mind was using Azure® Monitor to read the logs, but another option that most people forget is the Azure Linux Diagnostic Extension. This extension can collect metrics from the virtual machine (VM), read log events from syslog, customize the metrics it collects, collect specific log files and store them in a storage table, and send metrics and log events to Event Hubs endpoints. The Azure portal lets you configure all of the preceding settings except collecting specific log files. Let me show you the steps required and a gotcha that sent me on a troubleshooting mission.
Configuration
Let's use the following code to create a simple Linux® VM, install Nginx®, open port `80`, and create a storage account to store our logs in a table:
$rgName = 'jrlinux2'
$vmName = 'jrlinux2'
$stgName = 'jrladtest2'
$location = 'eastus'
$vmPassw0rd = 'azuremyp@ssw0rd!'
az group create --name $rgName --location $location
$vm = az vm create `
--resource-group $rgName `
--name $vmName `
--image UbuntuLTS `
--admin-username jrudley `
--admin-password $vmPassw0rd
#install nginx
az vm run-command invoke -g $rgName -n $vmName --command-id RunShellScript --scripts "sudo apt-get update && sudo apt-get install -y nginx"
#open up nsg
az vm open-port --port 80 --resource-group $rgName --name $vmName
#create a storage account for log table storage
az storage account create -n $stgName -g $rgName -l $location --sku Standard_LRS
To configure which log file to store in an Azure table, you need to push two JSON files to the VM. Download the PrivateConfig.json and PublicSettings.json from my repo here. Open the PublicSettings.json file and add your storage account name and the ResourceID of the VM that you created. To quickly get the VM ResourceID, run the following command:
($vm | ConvertFrom-Json).id
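For reference, here is a trimmed sketch of what the two files look like once filled in. Treat it as a starting point rather than the full schema: the storage account name matches the one we created earlier, the SAS token is a placeholder, and nginxaccess is just the table name I chose for the fileLogs entry:

PublicSettings.json (trimmed):
{
  "StorageAccount": "jrladtest2",
  "ladCfg": {
    "diagnosticMonitorConfiguration": {
      "metrics": {
        "metricAggregation": [ { "scheduledTransferPeriod": "PT1M" } ],
        "resourceId": "<the VM ResourceID from the command above>"
      }
    }
  },
  "fileLogs": [
    { "file": "/var/log/nginx/access.log", "table": "nginxaccess" }
  ]
}

PrivateConfig.json:
{
  "storageAccountName": "jrladtest2",
  "storageAccountSasToken": "<SAS token with access to the table and blob services>"
}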
Use the following command to deploy the Linux diagnostic extension into the VM:
az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 3.0 --resource-group $rgName --vm-name $vmName --protected-settings .\PrivateConfig.json --settings .\PublicSettings.json
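Before moving on, you can sanity-check that the extension provisioned successfully. This step is optional, but it saves guessing later:

#verify the extension deployed cleanly
az vm extension show --resource-group $rgName --vm-name $vmName --name LinuxDiagnostic --query provisioningState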
To generate traffic to populate the Nginx log file, run the following command:
curl "http://$(($vm | ConvertFrom-Json).publicIpAddress)"
The gotcha!
At this point, I expected the diagnostic agent to tail the log entries and create the Azure storage table we configured in the JSON files. I waited 15 minutes, and nothing happened. I reviewed the log directory at /var/log/azure/Microsoft.Azure.Diagnostics.LinuxDiagnostic/, and everything looked good: I saw the log file path I set, and everything had started successfully. After poking around, I found /var/opt/microsoft/omsagent/LAD/log/omsagent.log and noticed this error:
2020-07-10 21:20:44 +0000 [error]: Permission denied @ rb_sysopen - /var/log/nginx/access.log
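If you want to check that same log without opening an SSH session, run-command works here too. This is just a convenience; the path is the one above:

#tail the omsagent log remotely
az vm run-command invoke -g $rgName -n $vmName --command-id RunShellScript --scripts "tail -n 50 /var/opt/microsoft/omsagent/LAD/log/omsagent.log"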
I opened a support case with Microsoft because I thought the agent ran as `root`, but it turned out I needed to use `chmod` on the log file to grant read access to other users. In my support case, Microsoft mentioned they plan to add more documentation on this step.
az vm run-command invoke -g $rgName -n $vmName --command-id RunShellScript --scripts "sudo chmod o+r /var/log/nginx/access.log"
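One caveat to be aware of: on Ubuntu, the packaged logrotate rule for Nginx recreates access.log during rotation, which can silently undo that chmod. Check the create directive in /etc/logrotate.d/nginx; if it is the typical restrictive default, something like the following keeps the file readable after rotation (verify the defaults on your own distro before changing this):

#in /etc/logrotate.d/nginx (the Ubuntu default is usually: create 0640 www-data adm)
create 0644 www-data adm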
I used curl against the Nginx endpoint again to generate new log entries and noticed in the omsagent.log file that I now had an INFO message:
2020-07-10 21:50:04 +0000 [info]: following tail of /var/log/nginx/access.log
In Azure table storage, the extension automatically created the table and populated it with the new entries successfully.
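To spot-check the entries from the CLI instead of the portal, you can query the table directly. This assumes the table name you set in the fileLogs section of PublicSettings.json; mine was nginxaccess:

#query the first few log entries from the table
az storage entity query --table-name nginxaccess --account-name $stgName --num-results 10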