Creating custom Azure alerts from Log Analytics: parsing the alert in Azure Automation | Quisitive

This blog post will show the next step in how to create a custom alert format using a combination of Kusto and Azure Automation. In the first part of this blog post series, I introduced the steps I am using to provide what I refer to as a “human readable alert”. This series provides an alternative to the default alert formatting available in Azure Monitor. Relevant blog posts:

As a reminder, the solution is built from four major components: a custom Kusto query, an alert notification, Azure Automation, and Logic Apps (or Azure Automation with SendGrid). To get to the script below, we are using Azure Monitor with a Kusto query whose action group sends a webhook to a runbook in Azure Automation. This blog post will show the process to create the Azure Automation script, how to create the webhook, and how to integrate the webhook into an action group.

Creating the Azure Automation script:

The script below takes an incoming webhook, parses it for relevant information, formats it to JSON and calls a webhook that will email the reformatted content.

<#
.SYNOPSIS
Take an alert passed to this script via a webhook and convert it from JSON to a formatted email structure
#>
param (
    [Parameter(Mandatory=$false)]
    [object] $WebhookData
)

# If the runbook was called from a webhook, WebhookData will not be null.
if ($WebhookData) {
    write-output "Webhook data $WebhookData"

    # Logic to allow for testing in the Test pane, where the payload arrives
    # as a JSON string rather than as an object
    if (-Not $WebhookData.RequestBody) {
        $WebhookData = (ConvertFrom-Json -InputObject $WebhookData)
        write-output "test Webhook data $WebhookData"
    }
}

# Flatten the tables returned by the Log Analytics search API into an
# array of PSObjects (one object per row) so they are easy to work with
function CreateObjectView {
    param(
        $data
    )

    # Find the number of entries we'll need in this array
    $count = 0
    foreach ($table in $data.Tables) {
        $count += $table.Rows.Count
    }

    $objectView = New-Object object[] $count
    $i = 0
    foreach ($table in $data.Tables) {
        foreach ($row in $table.Rows) {
            # Create a dictionary of properties keyed by column name
            $properties = @{}
            for ($columnNum = 0; $columnNum -lt $table.Columns.Count; $columnNum++) {
                if ([string]::IsNullOrEmpty($row[$columnNum])) {
                    $properties[$table.Columns[$columnNum].name] = $null
                }
                else {
                    $properties[$table.Columns[$columnNum].name] = $row[$columnNum]
                }
            }
            $objectView[$i] = (New-Object PSObject -Property $properties)
            $null = $i++
        }
    }

    $objectView
}

$Spacer = ": "
$linespacer = "</p><p>"
$RequestBody = ConvertFrom-Json -InputObject $WebhookData.RequestBody

# The common alert schema nests the alert payload under a "data" property
if ($RequestBody.SearchResult -eq $null) {
    $RequestBody = $RequestBody.data
}

# Get all metadata properties
$WebhookName = $WebhookData.WebhookName
write-output "Webhookname is: $($WebhookName | Out-String)"

$AlertId = $RequestBody.essentials.alertId
write-output "AlertId is: $($AlertId | Out-String)"

$AlertRule = $RequestBody.essentials.alertRule
write-output "AlertRule is: $($AlertRule | Out-String)"

$QueryResults = $RequestBody.alertContext.condition.allOf.linkToSearchResultsUI
write-output "QueryResults is: $($QueryResults | Out-String)"

# Connect to Azure with the Automation account's managed identity
Connect-AzAccount -Identity
Set-AzContext -SubscriptionId "<subscription>"

# Call the search results API link from the alert to retrieve the full query results
$AccessToken = Get-AzAccessToken -ResourceUrl 'https://api.loganalytics.io'
$data = Invoke-RestMethod -Uri $RequestBody.alertContext.condition.allOf.linkToSearchResultsAPI -Headers @{ Authorization = "Bearer " + $AccessToken.Token }

write-output "Information found from the linkToSearchResultsAPI $($data)"

$SearchResults = CreateObjectView $data
write-output "SearchResult is: $($SearchResults | Out-String)"

# Get detailed search results
foreach ($Result in $SearchResults) {
    write-output "In search results"

    $Body = $Result.Body
    write-output "Result.body is $($Body)"

    # Append a link to the query results in the portal to the email body
    $Body = $Result.Body + "<p>Query: <a href=""" + $QueryResults + """>Link</a>"
    $Subject = $Result.Subject
    $NotificationEmail = $Result.NotificationEmail
    $ResourceId = $Result._ResourceId

    write-output "Subject is: $($Subject | Out-String)"
    write-output "Body is: $($Body | Out-String)"
    write-output "ResourceId is: $($ResourceId | Out-String)"
    write-output "NotificationEmail is: $($NotificationEmail | Out-String)"

    $params = @{"To"="$NotificationEmail";"Subject"="$Subject";"Body"=$Body;"From"="$NotificationEmail"}
    $json = ConvertTo-Json $params

    write-output "json value is $($json)"

    # Call the Logic App that will send the email
    $uri = "<URLToLogicApp>"
    Invoke-RestMethod -Uri $uri -Method Post -Body $($json)
}
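If you want to exercise the runbook from the Test pane before wiring up the webhook, you can paste in a sample payload. The sketch below is illustrative only: the field values are placeholders, and a real alert fired with the common alert schema carries many more properties than the ones this script reads. In the Test pane the payload arrives as a plain JSON string, which is why the script converts WEBHOOKDATA with ConvertFrom-Json when RequestBody is empty.

# Hypothetical Test pane payload - note that RequestBody is itself an escaped JSON string
$testPayload = @'
{
  "WebhookName": "TestWebhook",
  "RequestBody": "{\"schemaId\":\"azureMonitorCommonAlertSchema\",\"data\":{\"essentials\":{\"alertId\":\"<alertId>\",\"alertRule\":\"<alertRuleName>\"},\"alertContext\":{\"condition\":{\"allOf\":[{\"linkToSearchResultsUI\":\"<linkToSearchResultsUI>\",\"linkToSearchResultsAPI\":\"<linkToSearchResultsAPI>\"}]}}}}"
}
'@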

Where to create the webhook for the Azure Automation runbook:

Once the runbook has been saved and published in Azure Automation there is an option at the top to “add webhook” as shown in Figure 1 below.

Figure 1: Adding a webhook to a runbook

Use the “Create new webhook” as shown in Figure 2.

Figure 2: Creating a new webhook

Give the new webhook a name, make sure it is enabled, set the expiration date, and copy out the URL from the webhook as shown in Figure 3.

Figure 3: Creating a new webhook – specifying the name and expiration

Finish the steps to create the webhook; if you prefer to script this step instead of using the portal, a sketch follows below. Now that we have the webhook, we can add it into the appropriate action group.
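Here is a minimal sketch using the Az.Automation module, assuming placeholder names for the Automation account, resource group, and runbook. Note that the webhook URI is only returned at creation time, so capture it immediately:

# Create a webhook on the published runbook (names below are placeholders)
$webhook = New-AzAutomationWebhook -Name "AlertFormatterWebhook" `
    -RunbookName "<runbookName>" `
    -IsEnabled $true `
    -ExpiryTime (Get-Date).AddYears(1) `
    -ResourceGroupName "<resourceGroup>" `
    -AutomationAccountName "<automationAccount>" `
    -Force

# The URI cannot be retrieved again later, so store it somewhere safe now
$webhook.WebhookURI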

How to integrate the webhook for the Azure Automation runbook into an action group:

Now that we have the webhook to call the runbook, we can add the integration for the alert in Azure Monitor. The integration requires a rule, an action group, and configuration for the action group to use the webhook. The details on the rule are explained in this previous blog post. To create an action group, open Monitor and open “Action groups” as shown below in Figure 4.

Figure 4: Creating an action group

Choose the option to create an action group and then specify the configurations required (shown in Figure 5) for the subscription, resource group, action group name and display name.

Figure 5: Configuring an action group

Choose the action type of Webhook, give it a name, and provide the URL gathered in the previous section of this blog post (shown in Figure 6). Please note, there are other (potentially better) options in the Action type list, such as Automation Runbook or Secure Webhook.

Figure 6: Configuring an action group’s actions

To complete the integration between the alert and the call to the runbook, assign the Action group to the alert. For those who prefer PowerShell, the equivalent hookup is sketched below.
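This is a hedged sketch using the Az.Monitor module (the action group cmdlets have changed across Az.Monitor versions, so check Get-Help in your environment). It assumes the webhook URL from the previous section is in $webhookUri and that the names are placeholders:

# Define a webhook receiver pointing at the runbook's webhook URL
$receiver = New-AzActionGroupReceiver -Name "FormatAlertRunbook" -WebhookReceiver -ServiceUri $webhookUri

# Create or update the action group with that receiver (names are placeholders)
Set-AzActionGroup -Name "ag-custom-alerts" `
    -ShortName "customalrt" `
    -ResourceGroupName "<resourceGroup>" `
    -Receiver $receiver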

Summary: This blog post shows how to create a runbook that takes an existing Kusto-generated alert and reformats it into a human-readable format. This is done by leveraging Azure Monitor, alert rules, action groups, and an Azure Automation PowerShell runbook called through a webhook. The next blog post will explain how to use a Logic App to parse the JSON provided by the Azure Automation script in this post.

Solutions were a part of OMS and available within Log Analytics, providing pre-built visualizations for data in Log Analytics. These were moved into Azure Monitor a while back (Monitoring solutions in Azure Monitor – Azure Monitor | Microsoft Docs). Since then, Microsoft has shifted focus towards Insights and Workbooks, and solutions appear to be on the way to deprecation as part of this directional shift. There are a lot of good solutions that currently exist, so how do we move these solutions into a workbook?

It would be nice if there were a way to right-click on a solution and export it or automatically convert it to a workbook. There is, however, no such functionality available today. This blog post shows the process that I used to convert an existing solution into a workbook, using the “DNS Analytics (preview)” solution as an example. For this blog post, I am working from this assumption: if there is currently no data in a view, that view does not need to be translated into the new workbook.

Screenshots

Start with screenshots of each of the views currently available in the solution. While we may not be able to match these visualizations exactly, having pre-populated versions available for reference is invaluable when re-creating them. Below are the screenshots from the DNS Analytics solution.

Overview tile:

DNS Analytics overview pane

Drilling into the overview tile:

Results of drilling into the overview pane

Get to the queries

In the examples above, you can drill into the underlying queries by using the “See all…” button or by clicking on a specific record shown in the view. Below are the queries found by drilling in this way, with a name added above each query.

DNS security

DnsEvents
| where SubType == 'LookupQuery' and isnotempty(MaliciousIP)
| summarize Attempts = count() by ClientIP

Domains Queried

DnsEvents
| where SubType == 'LookupQuery'
| summarize Count = count() by Name

DNS Clients

DnsEvents
| where SubType == 'LookupQuery'
| summarize QueryCount = count() by ClientIP
| where QueryCount > 1000

Dynamic DNS Registration

DnsEvents
| where SubType == 'DynamicRegistration' and Result =~ 'Failure' and isnotempty(IPAddresses)
| summarize FailureCount = count() by Name, IPAddresses

Name Registration Queries

DnsEvents
| where SubType == 'DynamicRegistration'
| extend failureCount = iif(Result == 'Failure', 1, 0)
| summarize sum(failureCount), totalCount = count() by ClientIP

List of DNS Servers

DnsInventory
| where SubType == 'Server'
| project Computer, DomainName, ForestName, ServerIPs

List of DNS Zones

DnsInventory
| where SubType == 'Zone'
| project ZoneName, DynamicUpdate, NameServers, DnsSecSigned

Unused Resource Records

DnsInventory
| where SubType == 'ResourceRecord' and ResourceRecordName !in ((DnsEvents
    | where SubType == 'LookupQuery'
    | distinct Name))

DNS Servers Query Load

Perf
| where CounterName == 'Total Query Received/sec'
| summarize AggregatedValue = count() by bin(TimeGenerated, 1h)
| render timechart

DNS Zones Query Load

Perf
| where ObjectName == 'DDIZone'
| summarize AggregatedValue = count() by bin(TimeGenerated, 1h), CounterName
| render timechart

Configuration Events

DnsEvents
| where SubType == 'ConfigurationChange'
| project EventId, TimeGenerated, Computer, TaskCategory, Message

DNS Analytical Log

DnsEvents
| where SubType == 'LookupQuery'
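Before re-creating each visualization, it can be worth confirming that a query still returns data in your workspace (remember the assumption above: views with no data are not translated). Here is a small sketch using the Az.OperationalInsights module, with a placeholder workspace ID:

# Run one of the solution queries against the workspace and list the rows returned
$query = @'
DnsEvents
| where SubType == 'LookupQuery' and isnotempty(MaliciousIP)
| summarize Attempts = count() by ClientIP
'@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspaceId>" -Query $query
$result.Results | Format-Table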

Building the workbook

From the Log Analytics workspace, create a new workbook (“DNS Analytics” in this example). Add the query as shown below and run the query.

DNS Security query

For each workbook section, use the query name for the step name and the chart title, and match the “no data” message to what was shown in the solution (“No data found” in this example).

DNS security configuration

Lather, rinse, and repeat for each of the queries identified above. For some visualizations, additional steps may be required to configure a chart type or other settings to best represent the data.

As an example, I ended up tweaking one of the queries to show success and failure counts for DNS queries. To generate that result, I used:

DnsEvents
| where SubType == 'DynamicRegistration'
| extend failureCount = iif(Result == 'Failure', 1, 0)
| extend successCount = iif(Result == 'Success', 1, 0)
| project Failure = failureCount, Success = successCount, ClientIP, bin(TimeGenerated, 1h)


Finishing up the workbook

After determining where the various workbook components should go and what size to display them at, we end up with a workbook such as the one shown below.

DNS Clients workbook
DNS query success and failure
DNS Analytics workbook part 2

Pin to a dashboard

The various workbook visualizations can then be pinned onto an Azure dashboard as shown below:

DNS analytics as a dashboard

Summary: While there is no automated process to re-create solutions as workbooks and dashboards, the process used in this blog post was pretty straightforward, and the results appear very similar to what was provided by the original solution.

Recently I was working with a customer who was interested in developing alert queries similar to those built-in with System Center Operations Manager (SCOM). We started with the development of a low disk space condition. For those not familiar with how SCOM handles alerting for low disk space conditions check out Kevin’s blog post: How Logical Disk free space monitoring works in SCOM – Kevin Holman’s Blog.

The query that I developed uses both % free space and free megabytes to alert when both conditions have reached the appropriate thresholds. This query was designed to work both as a warning alert and a critical alert depending on the values that you provide at the start of the query. For a critical alert I use these values:

let FreeMbMin = 0;
let FreeMbMax = 1000;
let FreePercentMin = 0;
let FreePercentMax = 5;
let Severity = "Critical";

For a warning alert I use these values:

let FreeMbMin = 1000;
let FreeMbMax = 2000;
let FreePercentMin = 5;
let FreePercentMax = 10;
let Severity = "Warning";

This approach is fully customizable: you can choose the thresholds for both conditions, or even add another severity level to test for (Critical, Warning, Proactive?). One way to maintain the two variants from a single copy of the query is sketched below.
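Since only the five let statements differ between the Critical and Warning variants, you could keep one copy of the query and substitute the threshold values when creating each alert rule. This template is hypothetical; the full query body (shown later in this post) would replace the comment:

# Keep one copy of the query and inject the thresholds per severity
$queryTemplate = @'
let FreeMbMin = {0};
let FreeMbMax = {1};
let FreePercentMin = {2};
let FreePercentMax = {3};
let Severity = "{4}";
// ...rest of the disk space query shown below...
'@

# Produce the Critical and Warning variants from the same template
$criticalQuery = $queryTemplate -f 0, 1000, 0, 5, "Critical"
$warningQuery  = $queryTemplate -f 1000, 2000, 5, 10, "Warning"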

The matrix I am using for free disk space health is shown below:

Disk health matrix


The query is available below:

let FreeMbMin = 0;
let FreeMbMax = 1000;
let FreePercentMin = 0;
let FreePercentMax = 5;
let Severity = "Critical";
let LastCounterPercent = Perf
| where ObjectName == "LogicalDisk"
and CounterName == '% Free Space'
and CounterValue >= FreePercentMin
and CounterValue < FreePercentMax
and InstanceName != "_Total"
| summarize TimeGenerated = max(TimeGenerated) by Computer, InstanceName;
let CounterValuePercent = Perf
| where ObjectName == "LogicalDisk"
and CounterName == '% Free Space'
and CounterValue >= FreePercentMin
and CounterValue < FreePercentMax
and InstanceName != "_Total"
| summarize by Computer, InstanceName, CounterValue, TimeGenerated, _ResourceId
| extend summaryPercent = strcat(_ResourceId, Severity, " Low Disk Space ", Computer, " on disk ", InstanceName, " value of ", toint(CounterValue), "% free disk space threshold is ", FreePercentMin, " to ", FreePercentMax);
let LastCounterMb = Perf
| where ObjectName == "LogicalDisk"
and CounterName == 'Free Megabytes'
and CounterValue >= FreeMbMin
and CounterValue < FreeMbMax
and InstanceName != "_Total" and InstanceName !contains "HarddiskVolume"
| summarize TimeGenerated = max(TimeGenerated) by Computer, InstanceName;
let CounterValueMb = Perf
| where ObjectName == "LogicalDisk"
and CounterName == 'Free Megabytes'
and CounterValue >= FreeMbMin
and CounterValue < FreeMbMax
and InstanceName != "_Total" and InstanceName !contains "HarddiskVolume"
| summarize by Computer, InstanceName, CounterValue, TimeGenerated, _ResourceId
| extend summaryMb = strcat(_ResourceId, Severity, " Low Disk Space ", Computer, " on disk ", InstanceName, " value of ", toint(CounterValue), " free Megabytes threshold is ", FreeMbMin, " to ", FreeMbMax);
let CounterMb = CounterValueMb
| join LastCounterMb on TimeGenerated;
let CounterPercent = CounterValuePercent
| join LastCounterPercent on TimeGenerated;
CounterPercent | join CounterMb on Computer, InstanceName
| project Summary1 = summaryMb, Summary2 = summaryPercent, Computer, InstanceName, FreePercent = CounterValue, TimeGenerated, FreeMb = CounterValue1

Sample output shown below:

| Summary1 | Summary2 | Computer | InstanceName | FreePercent | TimeGenerated [UTC] | FreeMb |
| --- | --- | --- | --- | --- | --- | --- |
| Critical Low Disk Space xyz.abc.com on disk C: value of 956 free Megabytes threshold is 0 to 1000 | Critical Low Disk Space xyz.abc.com on disk C: value of 2% free disk space threshold is 0 to 5 | xyz.abc.com | C: | 2.090264 | 7/19/2021, 7:48:06.300 PM | 956 |

One of the challenges that we have seen while working with Teams or other video conferencing platforms is a general slowdown of meeting audio when multiple videos are shared, such as in a classroom environment or a large company meeting. During most of the day this isn’t a problem, but at certain points of the day (most often the afternoon) a significant slowdown can occur. This blog post will focus on debugging conference call latency issues in Microsoft Teams, but these issues can occur on any video conferencing software platform.

An important part of this to realize is that video sharing performance can be impacted by several underlying causes of conference call latency:

Additionally, the problem can be a combination of any of the above. So answering a simple question like “Why is my conference call latency poor, causing issues such as choppy audio or problems seeing the screen share?” isn’t really that simple.

The graphic below shows a simplified version of how each of the attendees connects to a video conference. In most cases, there is someone running the meeting, who we will refer to as the presenter (the teacher in a classroom setting). There are also several attendees (the students in the classroom setting). Each attendee connects to the internet in some manner (cable, fibre optic, ADSL, etc.), represented by the lines between the presenter or attendees and the internet. The connectivity is likely through different Internet Service Providers, but we will simplify this to show that they are all connecting to the internet somehow. From their internet connection, they each communicate with the video conference application (Teams in this blog post’s example).

Graphic 1: How people connect to a video conference when all is working well


There are a lot of parts that must work for this whole process to function. The presenter and attendees all need functional internet connectivity, and the video conference application must be functional and performing effectively. If problems occur at any point in the diagram, there will be problems in the video conference. As an example, if one attendee has a slow internet connection, it will impact their ability to participate in the video conference (including seeing what is shared on the screen, hearing audio from the presenter, etc.). The slow link is shown in graphic 2 below by changing the color of the link between the attendee and the internet to yellow.

Graphic 2: How people connect to a video conference when one attendee has a slow internet connection


If the person who is presenting (or teaching) has a slow internet connection, it will impact all of the attendees’ (students’) ability to see what is being shared on the screen, as well as the audio and video from the presenter. This is represented in graphic 3 by the yellow line between the presenter and the internet.

Graphic 3: How people connect to a video conference when the presenter has a slow internet connection


If internet service providers are experiencing a slowdown (most likely due to additional network traffic during this outbreak), this will impact all of the attendees of the video conference, as shown in graphic 4.

Graphic 4: How people connect to a video conference when the internet service providers or connections are slow


Finally, if there is an issue with the underlying video-conferencing application, this will also impact all attendees of the video conference, causing conference call latency issues as shown in graphic 5.

Graphic 5: How people connect to a video conference when the video conference application is slow


How to debug problems during video conferences

The above graphics should show that there are many different things which can cause a problem during a video conference. So how can we debug this situation?

Common issues & resolutions:

Tips & Tricks:

Feedback from a colleague on this blog post

I sent this blog post to David B, who had the following thoughts for consideration (this has been consolidated to specific bullet points):

Configuring an email notification for service incidents

In the Microsoft 365 admin center, under preferences, you can set up an email notification for service health issues for the services that you are interested in. To set this up, open the Service health view and click on Preferences (highlighted below).

Conference call latency: Service health preferences

If you enable the checkbox that says “Send me service health notifications in email”, you can specify whether to include incidents (which is what we are looking for in this case).

Preferences - part 1

You can choose what specific services you want to be notified about (Microsoft Teams and SharePoint Online in this example).

Preferences - part 2

This notification should be sent to the technical contact or the most technical person in your organization so they can determine whether an incident will impact your organization.

Configuring a Teams site to test connectivity

You can create a Teams site with tabs that help with debugging connectivity issues. For this Teams site, you can add one webpage that checks your internet connection and a second webpage that checks your connectivity to Office 365. These provide a quick way to debug what could be causing conference call latency and communication issues.

To configure this, I created a new Team called “Teams Status”. On this team, I used the + sign to add a new tab.

Adding tab in Teams

I created two tabs, one called “Internet Connectivity Test” and one called “Teams Connectivity Test”. For each of these, I added them as a website from the options shown below.

Add a tab - options

For this new tab, you just need to type in the name of the website and add the URL you want it to go to.

Adding a website

Below are screenshots from my two websites that are available directly in Teams so it’s easier to track down what may be causing issues.

If you show more information, fast.com gives additional details that can help with debugging connectivity from your location. The URL I added was: https://fast.com/. In the example below, we can see that my internet speed is 32 Mbps, unloaded latency is 14 ms, and loaded latency is 595 ms. Unloaded latency is how long a connection takes when there is little load on your link to the ISP; loaded latency is how long it takes when the link is under load.

ISP connection speed

The Teams Connectivity Test checks the load time to bring up https://outlook.office365.com/. The URL I added was: https://tools.pingdom.com/#5c486c4d70400000. In the example below, we can see that the load time is 365 ms.

Conference call latency connectivity test to O365
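If you would rather sanity-check the same two things from a console instead of Teams tabs, a couple of built-in PowerShell commands give a rough equivalent (a quick sketch, not a replacement for the tests above):

# Rough round-trip latency to the Office 365 front end (requires ICMP to be allowed)
Test-Connection -ComputerName outlook.office365.com -Count 4

# Rough load time for the Outlook on the web page
Measure-Command { Invoke-WebRequest -Uri "https://outlook.office365.com/" -UseBasicParsing } |
    Select-Object TotalMilliseconds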

Additional reference:

Summary: Understanding at a high level how video conferencing systems work can help you debug problems and work around them more quickly. Hopefully, this blog post has given you a quick crash course and some tips to help your meetings (or classes) go on without a hitch and avoid conference call latency!

Welcome to the “Introducing” series (check here for the full list of blog posts in this series). In the previous two blog posts, we introduced Azure and what services it provides, and then we introduced certifications for Azure and how to get started with Azure. In this blog post, we will introduce the structure of Azure.

Azure’s structure:

As a quick recap, Azure is a cloud-based computing service that provides IaaS, PaaS, and SaaS solutions (see this article for details). Benefits of cloud computing are covered in this previous blog post. Below is the structure which Azure uses and an explanation of the terms used to build out that structure.

Structure of Azure

Thank you to Chad S and Beth F for their help on this blog post!

Additional resources:

Series Navigation:

As part of the process to simplify the logic of my Logic App, one of the steps was to start converting existing Logic Apps into nested Logic Apps so that they can be called as modules that each perform a specific purpose. In this case, I am converting the Logic App I wrote to query Nest into a nested Logic App. Below are the steps required:

First, we change the start of the existing Logic App, then we change the end of the existing Logic App, and then we call the new nested Logic App. Details on these steps are below:

Change the start of the existing Logic App:

Remove the existing start of the Logic App (scheduled execution in my example) and replace it with “Manual – When an HTTP request is received”.

http request

Change the end of the existing Logic App:

Then add a Response action as the end of the Logic App, in this case passing back the body from the request made to the Nest API.

response 200

The updated version of the original Logic App is shown below:

http request

Add the new nested Logic App:

Once the Logic App has been saved, it will generate a URL specific to the nested Logic App. After this, the nested Logic App appears as an available action for other Azure Logic Apps. In the example below, I created a new Logic App that calls the nested Logic App, as shown below:

recurrence
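Before wiring the nested Logic App into a parent, you can confirm the HTTP trigger and Response action work by posting to the generated URL directly. Here is a minimal sketch, with the callback URL left as a placeholder:

# Post an empty JSON body to the nested Logic App's HTTP trigger and show the response
$uri = "<URLToNestedLogicApp>"
$response = Invoke-RestMethod -Uri $uri -Method Post -Body "{}" -ContentType "application/json"
$response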

Reference links:

Summary: To convert an existing Logic App into a nested Logic App, just remove the scheduled execution and replace it with “Manual – When an HTTP request is received”, add a Response action as the end of the Logic App, and save the updated Logic App.

This blog post series will cover two approaches which can be used to help to customize how alerts are formatted when they come from Azure Monitor for Log Analytics queries. For this first blog post we will take a simple approach to making these alerts more useful – cleaning up the underlying query.

Log analytics

In Azure Monitor, you can define alerts based on a query from Log Analytics. For details on this process to add an alert see this blog post. This blog post will focus on how you can clean up the query results to make a cleaner alert.

Cleaning up query results

For today’s blog item, we’ll start with the query I would normally have used for an alert:

Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time"
| where CounterValue > 40
| sort by TimeGenerated desc

The query provides all fields because we aren’t restricting what to return.

Log analytics

This will generate the alert as expected, and here’s the resulting email of that alert.

Log analytics

One of the Microsoft folks pointed out to me that cleaning up my query would clean up the results, and therefore the email being sent out (thank you, Oleg!). We can take a first stab at this by restricting which fields we return with a project statement:

Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time"
| where CounterValue > 40
| project Computer, CounterValue, TimeGenerated

Here’s the resulting email of the new alert:

Log analytics

For queries which are run directly in the portal, we can clean this up further by adding some extends which provide information on our various fields.

Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time"
| where CounterValue > 40
| extend ComputerText = "Computer Name"
| extend CounterValueText = "% Processor Utilization"
| extend TimeGeneratedText = "Time Generated"
| project ComputerText, Computer, CounterValueText, CounterValue, TimeGeneratedText, TimeGenerated

A sample result is below:

Log analytics

This is the query that we will use for the actual email alert, but we’ll showcase one more example in case it’s helpful. We can even move this to more of a sentence format for alerts such as this:

Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time"
| where CounterValue > 40
| extend CounterValueText = "% Processor Utilization"
| extend Text1 = " had a high "
| extend Text2 = " of "
| extend Text3 = " at "
| extend Text4 = "."
| project Computer, Text1, CounterValueText, Text2, CounterValue, Text3, TimeGenerated, Text4

A sample result for this query is below:

Log analytics

Comparing the original alert to the new alert side by side shows how much this one simple change can do to clean up your alerts and make them more useful (original on left, new on right):

Log analytics

The alert on the right-hand side removes clutter by decreasing the number of fields shown in the results section of the email. It also shortens the email by about a third, making it easier to find what you are looking for, as you can see in the examples above.

Summary

If you want to make your current alerts more useful, use a project command to restrict the fields which are sent in the email (i.e., clean up the Log Analytics query). This is quick to put in place and results in a much more readable email alert.

P.S. The email above on the right is, however, a long way from my optimal email format, shown below:

Log analytics

We will cover that approach to alerting from Log Analytics in the next blog post in this series! Would you like to know more? Get in touch with us here.