This blog post shows the next step in how to create a custom alert format using a combination of Kusto and Azure Automation. In the first part of this blog post series, I introduced the steps I use to provide what I refer to as a “human readable alert”. This blog post series provides another option beyond the default functionality available in Azure Monitor. Relevant blog posts:
- Explanations of the current alert format available in Azure Monitor
- Introducing the Kusto query and configuring alert notifications
- Receiving and processing the alert (discussed in this blog post).
- Emailing the alert (discussed in the next blog post)
As a reminder, the solution we are using is built from four major components: a custom Kusto query, an alert notification, Azure Automation, and Logic Apps (or Azure Automation with SendGrid). To get to the script below, we use Azure Monitor with a Kusto query whose alert uses an action group to send a webhook to a runbook in Azure Automation. This blog post shows the process to create the Azure Automation script, how to create the webhook, and how to integrate the webhook into an action group.
Creating the Azure Automation script:
The script below takes an incoming webhook, parses it for relevant information, formats it to JSON and calls a webhook that will email the reformatted content.
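For context, below is a trimmed sketch of the common alert schema payload that the webhook delivers to the runbook. Only the fields the script reads are shown, and all values are placeholders, not real alert data:

```json
{
  "schemaId": "azureMonitorCommonAlertSchema",
  "data": {
    "essentials": {
      "alertId": "/subscriptions/<subscription-id>/providers/Microsoft.AlertsManagement/alerts/<alert-id>",
      "alertRule": "<alert rule name>"
    },
    "alertContext": {
      "condition": {
        "allOf": [
          {
            "linkToSearchResultsUI": "https://portal.azure.com/...",
            "linkToSearchResultsAPI": "https://api.loganalytics.io/v1/workspaces/..."
          }
        ]
      }
    }
  }
}
```

The script pulls the alert metadata from `essentials` and calls `linkToSearchResultsAPI` to retrieve the detailed query results.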
<#
.SYNOPSIS
Take an alert passed to this script via a webhook and convert it from JSON to a formatted email structure
#>
param (
    [Parameter(Mandatory=$false)]
    [object] $WebhookData
)
# If the runbook was called from a webhook, WebhookData will not be null.
if ($WebhookData) {
    write-output "Webhook data $WebhookData"
    # Logic to allow for testing in the Test pane
    if (-Not $WebhookData.RequestBody) {
        $WebhookData = (ConvertFrom-Json -InputObject $WebhookData)
        write-output "test Webhook data $WebhookData"
    }
}
function CreateObjectView {
    param(
        $data
    )
    # Find the number of entries we'll need in this array
    $count = 0
    foreach ($table in $data.Tables) {
        $count += $table.Rows.Count
    }
    $objectView = New-Object object[] $count
    $i = 0
    foreach ($table in $data.Tables) {
        foreach ($row in $table.Rows) {
            # Create a dictionary of properties
            $properties = @{}
            for ($columnNum = 0; $columnNum -lt $table.Columns.Count; $columnNum++) {
                if ([string]::IsNullOrEmpty($row[$columnNum])) {
                    $properties[$table.Columns[$columnNum].name] = $null
                }
                else {
                    $properties[$table.Columns[$columnNum].name] = $row[$columnNum]
                }
            }
            $objectView[$i] = (New-Object PSObject -Property $properties)
            $null = $i++
        }
    }
    $objectView
}
$Spacer = ": "
$linespacer = "</p><p>"
$RequestBody = ConvertFrom-JSON -InputObject $WebhookData.RequestBody
if ($RequestBody.SearchResult -eq $null) {
    $RequestBody = $RequestBody.data
}
# Get all metadata properties
$WebhookName = $WebhookData.WebhookName
write-output "Webhookname is: $($WebhookName | Out-String)"
$AlertId = $RequestBody.essentials.alertId
write-output "AlertId is: $($AlertId | Out-String)"
$AlertRule = $RequestBody.essentials.alertRule
write-output "AlertRule is: $($AlertRule | Out-String)"
$QueryResults = $RequestBody.alertContext.condition.allOf.linkToSearchResultsUI
write-output "QueryResults is: $($QueryResults | Out-String)"
# Connect to Azure
Connect-AzAccount -identity
set-azcontext -subscriptionid "<subscription>"
$AccessToken = Get-AzAccessToken -ResourceUrl 'https://api.loganalytics.io'
$data = Invoke-RestMethod -Uri $RequestBody.alertContext.condition.allOf.linkToSearchResultsAPI -Headers @{ Authorization = "Bearer " + $AccessToken.Token }
write-output "Information found from the linkToSearchResultsAPI $($data)"
$SearchResults = CreateObjectView $data
write-output "SearchResult is: $($SearchResults | Out-String)"
# Get detailed search results
foreach ($Result in $SearchResults)
{
    write-output "In search results"
    write-output "Result.body is $($Result.Body)"
    # Append a link to the query results to the body
    $Body = $Result.Body + "<p>Query: <a href=""" + $QueryResults + """>Link</a>"
    $Subject = $Result.Subject
    $NotificationEmail = $Result.NotificationEmail
    $ResourceId = $Result._ResourceId
    write-output "Subject is: $($Subject | Out-String)"
    write-output "Body is: $($Body | Out-String)"
    write-output "ResourceId is: $($ResourceId | Out-String)"
    write-output "NotificationEmail is: $($NotificationEmail | Out-String)"
    $params = @{"To"="$NotificationEmail";"Subject"="$Subject";"Body"=$Body;"From"="$NotificationEmail"}
    $json = ConvertTo-Json $params
    write-output "json value is $($json)"
    # Call the LogicApp that will send the email
    $uri = "<URLToLogicApp>"
    Invoke-RestMethod -Uri $uri -Method Post -Body $json
}
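For reference, the body posted to the Logic App is the flat JSON object produced by serializing the $params hashtable. A sketch of its shape, with placeholder values rather than real alert data:

```json
{
  "To": "ops@contoso.com",
  "Subject": "Low disk space on server01",
  "Body": "<p>Disk C: is low on space</p><p>Query: <a href=\"https://portal.azure.com/...\">Link</a></p>",
  "From": "ops@contoso.com"
}
```

The Logic App in the next blog post parses these four fields to construct and send the email.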
Where to create the webhook for the Azure Automation runbook:
Once the runbook has been saved and published in Azure Automation there is an option at the top to “add webhook” as shown in Figure 1 below.
Figure 1: Adding a webhook to a runbook
Use the “Create new webhook” as shown in Figure 2.
Figure 2: Creating a new webhook
Give the new webhook a name, make sure it is enabled, set the expiration date, and copy out the URL from the webhook as shown in Figure 3.
Figure 3: Creating a new webhook – specifying the name and expiration
Finish the steps to create the webhook. Now that we have the webhook we can add it into the appropriate action group.
How to integrate the webhook for the Azure Automation runbook into an action group:
Now that we have the webhook to call the runbook, we can add the integration for the alert in Azure Monitor. The integration requires a rule, an action group, and configuration for the action group to use the webhook. The details on the rule are explained in this previous blog post. To create an action group, open Monitor and open “Action groups” as shown below in Figure 4.
Figure 4: Creating an action group
Choose the option to create an action group and then specify the configurations required (shown in Figure 5) for the subscription, resource group, action group name and display name.
Figure 5: Configuring an action group
Choose the action type of Webhook, give it a name, and provide the URL gathered in the previous section of this blog post (shown in Figure 6). Please note, there are other (potentially better) options in the Action type list, such as Automation Runbook or Secure Webhook.
Figure 6: Configuring an action group actions
To complete the integration between the alert and the call to the runbook, assign the Action group to the alert.
Summary: This blog post shows how to create a runbook that takes an existing Kusto-generated alert and reformats it into a human-readable format. This is done by leveraging Azure Monitor, alert rules, action groups, Azure Automation, and PowerShell called through a webhook. The next blog post will explain how to use a Logic App to parse the JSON provided by the Azure Automation script in this blog post.
Solutions were a part of OMS and available within Log Analytics that provided pre-built visualizations for data in Log Analytics. These were moved into Azure Monitor a while back (Monitoring solutions in Azure Monitor – Azure Monitor | Microsoft Docs). Since that point in time, Microsoft has shifted focus towards Insights and Workbooks. It appears that solutions are on the way to being deprecated as part of this directional shift. There are a lot of good solutions that currently exist, so how do we move these solutions into a workbook?
It would be nice if there were a way to right-click on a solution and export it or automatically convert it to a workbook. There is, however, no such functionality available today. This blog post shows the process that I used to convert an existing solution into a workbook, using the “DNS Analytics (preview)” solution as an example. For this blog post, I am working from the assumption that if there is currently no data in a view, that view does not need to be translated into the new workbook.
Screenshots
Start with screenshots of each of the views currently available in the solution. While we may not be able to exactly match these visualizations, having pre-populated versions available for reference is invaluable when re-creating visualizations. Below are the screenshots from the DNS Analytics solution.
Overview tile:

Drilling into the overview tile:

Get to the queries
In the examples above you can drill into the underlying queries by using the “See all…” button or clicking on a specific record shown in the view. Below are some of the queries which were found drilling in this way with names added to the top of the query.
DNS security
DnsEvents
| where SubType == 'LookupQuery' and isnotempty(MaliciousIP)
| summarize Attempts = count() by ClientIP
Domains Queried
DnsEvents
| where SubType == 'LookupQuery'
| summarize Count = count() by Name
DNS Clients
DnsEvents
| where SubType == 'LookupQuery'
| summarize QueryCount = count() by ClientIP
| where QueryCount > 1000
Dynamic DNS Registration
DnsEvents
| where SubType == 'DynamicRegistration' and Result =~ 'Failure' and isnotempty(IPAddresses)
| summarize FailureCount = count() by Name, IPAddresses
Name Registration Queries
DnsEvents
| where SubType == 'DynamicRegistration'
| extend failureCount = iif(Result == 'Failure', 1, 0)
| summarize sum(failureCount), totalCount = count() by ClientIP
List of DNS Servers
DnsInventory
| where SubType == 'Server'
| project Computer, DomainName, ForestName, ServerIPs
List of DNS Zones
DnsInventory
| where SubType == 'Zone'
| project ZoneName, DynamicUpdate, NameServers, DnsSecSigned
Unused Resource Records
DnsInventory
| where SubType == 'ResourceRecord' and ResourceRecordName !in ((DnsEvents
| where SubType == 'LookupQuery'
| distinct Name))
DNS Servers Query Load
Perf
| where CounterName == 'Total Query Received/sec'
| summarize AggregatedValue = count() by bin(TimeGenerated, 1h)
| render timechart
DNS Zones Query Load
Perf
| where ObjectName == 'DDIZone'
| summarize AggregatedValue = count() by bin(TimeGenerated, 1h), CounterName
| render timechart
Configuration Events
DnsEvents
| where SubType == 'ConfigurationChange'
| project EventId, TimeGenerated, Computer, TaskCategory, Message
DNS Analytical Log
DnsEvents
| where SubType == 'LookupQuery'
Building the workbook
From the Log Analytics workspace, create a new workbook (“DNS Analytics” in this example). Add the query as shown below and run the query.

For each workbook section, use the name for the step name and chart title, and match the value for the no-data message to what was shown in the solution (“No data found” in this example).

Lather, rinse and repeat for each of the queries identified above. For each visualization, steps may be required to configure a chart to represent the data and/or to configure specific ways to visualize that data.
As an example, I ended up tweaking one of the queries to show success and failures for DNS queries to this result.
DnsEvents
| where SubType == 'DynamicRegistration'
| extend failureCount = iif(Result == 'Failure', 1, 0)
| extend successCount = iif(Result == 'Success', 1, 0)
| project Failure = failureCount, Success = successCount, ClientIP, TimeGenerated = bin(TimeGenerated, 1h)
To generate this result:
Finishing up the workbook
After determining where the various workbook components need to go, what size to display them, and such we end up with a workbook such as the one shown below.



Pin to a dashboard
The various workbook visualizations can then be pinned onto an Azure dashboard as shown below:

Summary: While there is not an automated process to re-create solutions in workbooks and dashboards, the process used in the blog post was pretty straightforward and the results appear to be very similar to what was provided by the original solution.
Recently I was working with a customer who was interested in developing alert queries similar to those built-in with System Center Operations Manager (SCOM). We started with the development of a low disk space condition. For those not familiar with how SCOM handles alerting for low disk space conditions check out Kevin’s blog post: How Logical Disk free space monitoring works in SCOM – Kevin Holman’s Blog.
The query that I developed uses both % free space and free megabytes to alert when both conditions have reached the appropriate thresholds. This query was designed to work both as a warning alert and a critical alert depending on the values that you provide at the start of the query. For a critical alert I use these values:
let FreeMbMin = 0;
let FreeMbMax = 1000;
let FreePercentMin = 0;
let FreePercentMax = 5;
let Severity = "Critical";
For a warning alert I use these values:
let FreeMbMin = 1000;
let FreeMbMax = 2000;
let FreePercentMin = 5;
let FreePercentMax = 10;
let Severity = "Warning";
This approach is fully customizable: you can choose the thresholds for both conditions, or even add another condition to test for (Critical, Warning, Proactive?).
The matrix I am using for free disk space health is shown below:

An example matrix of these conditions is shown below:

The query is available below:
let FreeMbMin = 0;
let FreeMbMax = 1000;
let FreePercentMin = 0;
let FreePercentMax = 5;
let Severity = "Critical";
let LastCounterPercent = Perf
| where ObjectName == "LogicalDisk"
and CounterName == '% Free Space'
and CounterValue >= FreePercentMin
and CounterValue < FreePercentMax
and InstanceName != "_Total"
| summarize TimeGenerated = max(TimeGenerated) by Computer, InstanceName;
let CounterValuePercent = Perf
| where ObjectName == "LogicalDisk"
and CounterName == '% Free Space'
and CounterValue >= FreePercentMin
and CounterValue < FreePercentMax
and InstanceName != "_Total"
| summarize by Computer, InstanceName, CounterValue, TimeGenerated, _ResourceId
| extend summaryPercent = strcat(_ResourceId, Severity, " Low Disk Space ", Computer, " on disk ", InstanceName, " value of ", toint(CounterValue), "% free disk space threshold is ", FreePercentMin, " to ", FreePercentMax);
let LastCounterMb = Perf
| where ObjectName == "LogicalDisk"
and CounterName == 'Free Megabytes'
and CounterValue >= FreeMbMin
and CounterValue < FreeMbMax
and InstanceName != "_Total" and InstanceName !contains "HarddiskVolume"
| summarize TimeGenerated = max(TimeGenerated) by Computer, InstanceName;
let CounterValueMb = Perf
| where ObjectName == "LogicalDisk"
and CounterName == 'Free Megabytes'
and CounterValue >= FreeMbMin
and CounterValue < FreeMbMax
and InstanceName != "_Total" and InstanceName !contains "HarddiskVolume"
| summarize by Computer, InstanceName, CounterValue, TimeGenerated, _ResourceId
| extend summaryMb = strcat(_ResourceId, Severity, " Low Disk Space ", Computer, " on disk ", InstanceName, " value of ", toint(CounterValue), " free Megabytes threshold is ", FreeMbMin, " to ", FreeMbMax);
let CounterMb = CounterValueMb
| join LastCounterMb on TimeGenerated;
let CounterPercent = CounterValuePercent
| join LastCounterPercent on TimeGenerated;
CounterPercent | join CounterMb on Computer, InstanceName
| project Summary1=summaryMb, Summary2=summaryPercent, Computer, InstanceName, FreePercent = CounterValue, TimeGenerated, FreeMb = CounterValue1
Sample output shown below:
Summary1 | Summary2 | Computer | InstanceName | FreePercent | TimeGenerated [UTC] | FreeMb |
Critical Low Disk Space xyz.abc.com on disk C: value of 956 free Megabytes threshold is 0 to 1000 | Critical Low Disk Space xyz.abc.com on disk C: value of 2% free disk space threshold is 0 to 5 | xyz.abc.com | C: | 2.090264 | 7/19/2021, 7:48:06.300 PM | 956 |

One of the challenges that we have seen while working with Teams or other video conferencing platforms is a general slowdown of meeting audio when multiple videos are shared such as in a classroom environment or a large company meeting. During most of the day, this isn’t a problem, but as we hit certain points of the day (afternoon most often) there is a significant slowdown that can occur. This blog post will focus on debugging conference call latency issues in Microsoft Teams, but these issues will occur on any video conferencing software platform.
An important thing to realize is that video-sharing performance can be impacted by several underlying causes of conference call latency:
- A slow internet connection from any of the attendees (students)
- A slow internet connection from the presenter (teacher)
- A slowdown on the internet service providers
- A slowdown on the platform which is being used for these video calls (Teams in this example)
Additionally, it can be a combination of any of the above. Answering a simple question like “Why is my conference call latency poor, causing issues such as choppy audio or problems seeing the screen share?” isn’t really that simple.
The graphic below shows a simplified version of how each of the attendees connects to a video conference. In most cases, there is someone running the meeting who we will refer to as a presenter (the teacher in a classroom setting). There are also several attendees (the students in the classroom setting). Each of these attendees is connecting to the internet through some manner (cable, fibre optic, ADSL, etc.) which are represented by the lines between the presenter and attendees to the internet. The connectivity is likely to different Internet Service Providers but we will simplify this to show that they are all connecting to the internet somehow. From their internet connection, they are each communicating with the video conference application (Teams in this blog post example).
Graphic 1: How people connect to a video conference when all is working well

There are a lot of parts that must work to make this whole process work. The presenter and attendees all need to have functional internet connectivity, and the video conference application must be functional and performing effectively. If any problems occur in the diagram below, there will be problems in the video conference. As an example, if one attendee has a slow internet connection, it will impact their ability to see the video conference (including what is shared on the screen, audio from the presenter, etc.). The slow link is shown in graphic 2 below by changing the color of the link between the attendee and the internet to yellow.
Graphic 2: How people connect to a video conference when one attendee has a slow internet connection

If the person who is presenting (or teaching) has a slow internet connection it will impact all the attendee’s (students) ability to see what they are sharing on the screen as well as audio and video from the presenter. This is represented in graphic 3 by the yellow line between the presenter and the internet.
Graphic 3: How people connect to a video conference when the presenter has a slow internet connection

If internet service providers are experiencing a slowdown (most likely due to additional network traffic occurring during this outbreak), this will impact all of the attendees of the video conference as shown in graphic 4.
Graphic 4: How people connect to a video conference when the internet service providers or connections are slow

Finally, if there is an issue with the underlying video-conferencing application this will also impact all attendees of the video, causing conference call latency issues as shown in graphic 5.
Graphic 5: How people connect to a video conference when the video conference application is slow

How to debug problems during video conferences
The above graphics should show that there are many different things which can cause a problem during a video conference. So how can we debug this situation?
- A good place to start is by asking your attendees if they are experiencing problems. Chat works well for this. If it is a single attendee (student) it is most likely an internet connection problem on their side.
- If multiple students are impacted, the next place to check is the presenter’s network.
- To check internet connectivity speeds, use a site that can determine your performance accessing the internet such as https://www.speedtest.net/, or https://fast.com/. These sites will let you know what your current connectivity speeds look like. Please note, you will want to know what is “normal” for your environment so check this before you have problems. Your speed will vary depending on whether you are hard-wired or on Wi-Fi and if you are on Wi-Fi it will vary depending on the signal strength of your Wi-Fi in the house. For an example of how to do this type of test in Teams refer to the “Configuring a Teams site to test connectivity” section of this blog post.
- If it’s not a connection or performance problem getting to the internet for the presenter and/or students, it may be the ISP. A good place to check for updates on ISP outages is here.
- Finally, there can also be a problem with the video conference application. For situations like this (referred to as a service incident), go to Service Health in the Microsoft 365 admin center (you will need to be an admin to get to this page). Additional information on how to determine if there are current service incidents is in “Configuring a Teams site to test connectivity” section of this blog post. And a process to send notifications if there are issues in Teams is documented in the “Configuring an email notification for service incidents” section of this blog post.
Common issues & resolutions:
- Issue: Is your screen share blurry, or are there audio problems, for multiple attendees?
- Workaround: Turn off video sharing for all attendees. If that doesn’t help turn off video sharing for the presenter. If it’s still blurry, you can use your cell phone to dial into the call to offload everything except for the screen sharing capabilities.
- Issue: Is your screen share blurry, or are there audio problems, for a single attendee?
- Workaround: This is most likely caused by a slow internet connection for the attendee. They can debug their internet connection to see what is wrong or dial into the call to offload everything except for their ability to see the screen share.
- Issue: No matter what I do the conference call is not working.
- Workaround: Try a different video conference call solution (Skype, Zoom, etc.) for the duration of the issue.
- Issue: How can I see each of the attendees on the call (or students)?
- Workaround: Microsoft Teams is currently limited to four videos at the same time, but you can pin specific attendees’ videos so that you can check in on each attendee (or student) if you need to.
Tips & Tricks:
- If your internet connection is slow, it may be overloaded
- Check with the other people in your location to see if they are doing tasks on the internet which are bandwidth-intensive and could be delayed (such as watching Netflix, downloading large files, etc.).
- Hardwire for your conference calls
- Hard-wired connections are preferable (especially for the presenter or teacher) but wireless is ok if that’s the only option available.
- Use a good audio device or a head-set
- If there are problems hearing you, use a good headset to help with clearing this up. Another option is a device like a Jabra puck.
- Configuring an email notification for service incidents
- See the “Configuring an email notification for service incidents” section of this blog post for details.
- Configuring a Teams site to test connectivity
- See the “Configuring a Teams site to test connectivity” section of this blog post for details.
Feedback from a colleague on this blog post
I sent this blog post to David B, who had the following thoughts for consideration (this has been consolidated to specific bullet points):
- Teams architecture details: In Teams, each person sends 1 media stream to Microsoft and gets a single media stream back from Microsoft. Microsoft mixes the video on their servers to show the last 4 who spoke and all the audio. Desktop sharing is also considered a video stream and goes with your video up to Microsoft in the same channel.
- Upload vs. download speeds: Performance issues are often the result of the upload speed as opposed to the download speed. Most people are using DSL lines with massive download pipes, but slower uploads. That can cause problems with what they are sending.
- Ports used by Teams: The other issue is that Microsoft Teams prefers to run on specific UDP ports and will fall back to port 443 if it cannot use them. Check your home network to see if your router’s firewall is blocking those UDP ports, as that would slow things down as well. Since many organizations haven’t whitelisted the UDP ports on their firewalls, it seems likely that many home routers haven’t either, which could result in performance issues.
Configuring an email notification for service incidents
On the Microsoft 365 admin center under preferences, you can set up an email notification if there are health service issues for the services that you are interested in. To set this up open the Service health view and click on Preferences (highlight below).

If you enable the checkbox which says “Send me service health notifications in email” you can specify whether to include incidents (which we are looking for in this case).

You can choose what specific services you want to be notified about (Microsoft Teams and SharePoint Online in this example).

This notification should be sent to your technical contact at your organization or to the most technical person in your organization so they can determine if this incident will impact your organization.
Configuring a Teams site to test connectivity
You can create a Teams site which has different pages which will help with debugging connectivity issues. For this Teams site, you can add a webpage that points to one location to check your internet connection and a second webpage that checks your connectivity to Office 365. These provide a quick way to debug what could be causing conference call latency and communication issues.
To configure this, I created a new Team called “Teams Status”. On this team, I used the + sign to add a new tab.

I created two tabs, one called “Internet Connectivity Test” and one called “Teams Connectivity Test”. For each of these, I added them as a website from the options shown below.

For this new tab, you just need to type in the name of the website and add the URL you want it to go to.

Below are screenshots from my two websites that are available directly in Teams so it’s easier to track down what may be causing issues.
If you choose to show more information, the site gives additional details that can help with debugging connectivity from your location. The URL I added was: https://fast.com/. In the example below, we can see that my internet speed is 32 Mbps, unloaded latency is 14 ms, and loaded latency is 595 ms. Unloaded latency is how long a connection takes when there is little load on your link to the ISP; loaded latency is how long it takes when that link is under load.

The Teams Connectivity Test checks the load time to bring up https://outlook.office365.com/. The URL I added was: https://tools.pingdom.com/#5c486c4d70400000. In the example below, we can see that the load time is 365 ms.

Additional reference:
- https://docs.microsoft.com/en-us/office365/enterprise/performance-tuning-using-baselines-and-history
Summary: Understanding how video conferencing systems work from a high level can help you to debug problems and work around them more quickly. Hopefully, this blog post has given you a quick crash course and has given some tips which will help to make your meetings (or classes) continue to go on without a hitch and avoid conference call latency!

Welcome to the “Introducing” series (check here for the full list of blog posts in this series). In the previous two blog posts, we introduced Azure and what services it provides, and then we introduced certifications for Azure and how to get started with Azure. In this blog post, we will introduce the structure of Azure.
Azure’s structure:
As a quick recap for Azure, it is a cloud-based computing service that provides IaaS, PaaS and SaaS solutions (see this article for details). Benefits to cloud computing are included in this previous blog post. Below is the structure which Azure uses and an explanation of the terms which are used to build out that structure.
- Tenant: A tenant represents an organization in Azure Active Directory – the method used to authenticate users in an organization. It contains domains, users, security groups and subscriptions. For more details, see this article on Microsoft Docs.
- Management Group: Used for grouping of subscriptions. For more details, see this article on Microsoft Docs.
- Subscription: A subscription defines the billing mechanism and provides a boundary for resources and resource groups. For more details, see this article on Microsoft Docs.
- Resource Group: Resource groups are used to logically group related resources such as storage accounts, virtual networks, and virtual machines (VMs). Grouping these together helps to deploy, manage, and maintain them together. For more details, see this article on Microsoft Docs.
- Resource: An item that is part of an Azure solution. Examples include databases or virtual machines. For more details, see this article on Microsoft Docs.
- Tags: Tags are used to identify resources. Tags are name-value pairs assigned to resources or resource groups. For more details, see this article on Microsoft Docs.
- Datacenters: A datacenter is a facility where the cloud services available in Azure are physically located (a datacenter can contain hundreds of thousands of computers and can be 20-30 football fields in size). For a good example of what an Azure datacenter looks like, check out the video available here.
- Regions: A region is a group of datacenters within a specific latency-defined perimeter. As of 3/29/2020, there were 58 Azure regions worldwide.
- Countries: Azure datacenters are currently available in 140 countries as shown in the graphic below (screenshot gathered from here as of 3/29/2020).
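The hierarchy above is reflected in every Azure resource ID, which encodes the subscription, resource group, resource provider, and resource. A sketch with placeholder values (a virtual machine is used here as an illustrative resource type):

```
/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>
```

Reading an ID from left to right walks down the structure: subscription, then resource group, then the individual resource.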
Thank you to Chad S and Beth F for their help on this blog post!
Additional resources:
Series Navigation:
- Go back to the previous article in the series: Introducing Certifications for Azure and how to get started with Azure
- Continue to the next in this series: Introducing Azure Costing
As part of the process to simplify the logic of my Logic App, one of the steps was to convert existing Logic Apps into nested Logic Apps so that they can be called as modules that each perform a specific purpose. In this case, I am converting the Logic App which I wrote to query Nest into a nested Logic App. Below are the steps required:
First, we change the start of the existing Logic App, then we change the end of the existing Logic App, and then we call the new nested Logic App. Details on these steps are below:
Change the start of the existing Logic App:
Remove the existing start of the Logic App (scheduled execution in my example) and replace it with “Manual – When an HTTP request is received”.

Change the end of the existing Logic App:
Then add Response as the end of the Logic App, in this case passing back the body from the request which was made to the Nest API.

The updated version of the original Logic App is shown below:

Add the new nested Logic App:
Once the nested Logic App has been saved, it will generate a URL specific to that Logic App. After this, the nested Logic App appears as an option for Azure Logic Apps. In the example below, I created a new Logic App which was able to call the nested Logic App via an action, as shown below:

Reference links:
- This is shown from a high level in this TechNet article: https://social.technet.microsoft.com/wiki/contents/articles/34129.azure-logic-apps-call-nested-logic-apps-directly-from-logic-apps-designer.aspx
Summary: To convert an existing Logic App into a nested Logic App, just remove the scheduled execution and replace it with “Manual – When an HTTP request is received”, add a Response action as the end of the Logic App, and save the updated Logic App.
This blog post series will cover two approaches which can be used to help to customize how alerts are formatted when they come from Azure Monitor for Log Analytics queries. For this first blog post we will take a simple approach to making these alerts more useful – cleaning up the underlying query.
In Azure Monitor, you can define alerts based on a query from Log Analytics. For details on this process to add an alert see this blog post. This blog post will focus on how you can clean up the query results to make a cleaner alert.
Cleaning up query results
For today’s blog item we’ll start with what I would have used normally as a query for an alert:
Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time"
| where CounterValue > 40
| sort by TimeGenerated desc
The query provides all fields because we aren’t restricting what to return.

This will generate the alert as expected, and here’s the resulting email of that alert.

One of the Microsoft folks pointed out to me that if I cleaned up my query, it would clean up the results, and therefore the email which is being sent out (thank you Oleg!). We can take a first stab at this by restricting which fields we return with a project statement:
Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time"
| where CounterValue > 40
| project Computer, CounterValue, TimeGenerated
Here’s the resulting email of the new alert:

For queries which are run directly in the portal, we can clean this up further by adding some extend statements which label our various fields.
Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time"
| where CounterValue > 40
| extend ComputerText = "Computer Name"
| extend CounterValueText = "% Processor Utilization"
| extend TimeGeneratedText = "Time Generated"
| project ComputerText, Computer, CounterValueText, CounterValue, TimeGeneratedText, TimeGenerated
A sample result is below:

This is the query that we will use for the actual email alert, but we’ll showcase one more example in case it’s helpful. We can even move this to more of a sentence format for alerts such as this:
Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time"
| where CounterValue > 40
| extend CounterValueText = "% Processor Utilization"
| extend Text1 = " had a high "
| extend Text2 = " of "
| extend Text3 = " at "
| extend Text4 = "."
| project Computer, Text1, CounterValueText, Text2, CounterValue, Text3, TimeGenerated, Text4
A sample result for this query is below:

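As an aside (a sketch of my own, not part of the original alert configuration), the same sentence could be collapsed into a single column with `strcat`, which concatenates its arguments and converts non-string values such as `TimeGenerated` to strings:

```kusto
Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time"
| where CounterValue > 40
| project AlertText = strcat(Computer, " had a high % Processor Utilization of ",
    CounterValue, " at ", TimeGenerated, ".")
```

Whether one column or several reads better in the alert email is a matter of taste; the multi-column version above keeps the values easier to scan.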
Comparing the original alert to the new alert side by side shows how much this one simple change can do to clean up your alerts and make them more useful (original on left, new on right):


The alert on the right-hand side helps to remove the clutter by decreasing the number of fields shown in the results section of the email. It also shortens the email by about a third, making it easier to find what you are looking for, as you can see in the examples above.
Summary
If you want to make your current alerts more useful, use a project statement to restrict the fields which are sent in the email (i.e., clean up the Log Analytics query). This is quick to put in place and results in a much more readable email alert.
P.S. The email above on the right is, however, a long way from my optimal email format, shown below:

We will cover that approach to alerting from Log Analytics in the next blog post in this series! Would you like to know more? Get in touch with us here.