Comparing Azure vulnerability scanning solutions

Microsoft Defender for Cloud provides recommendations on various items related to Azure resources (and on-premises resources via Azure Arc). For example, one of the recommendation types focuses on vulnerabilities that may exist on virtual machines (VMs). Microsoft provides two built-in vulnerability assessment solutions for VMs.

One is “Microsoft Defender Vulnerability Management,” and the other is the “integrated vulnerability scanner powered by Qualys” (referred to from here forward as “Qualys”). Microsoft includes both solutions as part of Microsoft Defender for Servers. In addition, Microsoft has made Microsoft Defender Vulnerability Management (referred to from here forward as “Default”) the default vulnerability scanner. These two options are shown below in Figure 1.

Figure 1: Vulnerability assessment solutions currently available

My recommendation?

I recommend using the Qualys scanner instead of the Default vulnerability scanner, because the Qualys scanner checks for more vulnerabilities and therefore produces more complete results.

If you want to go further into the weeds on what I found, feel free to continue reading through the functional comparison, FAQ, and reference links sections below.

Functional comparison:

FAQs:

Figure 2: Two machines, with one onboarded to each vulnerability scanner

Reference links:

Qualys usage is included per this article: Defender for Cloud’s integrated vulnerability assessment solution for Azure, hybrid, and multicloud machines | Microsoft Learn

So, what is your experience with these options? Do you have any insights that you can provide? Please feel free to reach out to me with them on LinkedIn or Twitter!

In this blog post, we will disassemble the alert structure from Azure for metrics and logs, compare what is included in each alert, and point out challenges in the alert functionality currently available in Azure. In the first part of this blog series, we introduced the new dynamic threshold functionality available in Azure Monitor.

So, what do the alerts look like?

The answer is that it varies widely based on what type of alert it is (metric or Log Analytics based).

Metric-based alert format:

Below is a sample alert format, based on what I have seen, for a metric-based CPU alert.

Subject format:

Subject samples:

Body sample: (items in bold below are fields that do not exist in Log Analytics based alerts and are moved to the bottom for readability)

Microsoft has effectively provided all of the potentially relevant content in the email. This is logical as these emails may be sent to ticketing systems, so any relevant fields should be included.

For a metric, the information is pretty much there for what you need to know about an alert condition.

From a usability perspective, the content of the alert has the relevant information, including a link to the alert in Azure monitor, a link to the alert rule, the metric’s value, and how many violations have occurred versus how many periods the alert was examined over.

Log Analytics-based alert format:

Now let’s look at what happens if you decide to use Log Analytics as your data source for an alert. Below is a sample alert format, based on what I have seen, for a Log Analytics-based CPU alert. The subject format is the same, as shown below.

Subject format:

Subject samples:

Body sample: (items below in bold are fields that do not exist for Metric based alerts and are moved to the bottom for readability)

The additional fields are related to the alert description, how query results are split, and include links to search results and the query run as part of the alert. The alert does not know what the query is checking for; it just knows that the query is being run and the result sent back from the query.

NOTE: As an example, for a high CPU condition, the alert may not know how high the CPU is, how long it has been that high, and other relevant pieces of information.
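To make this concrete, below is a hypothetical sketch (not taken from the alert email above) of the kind of query a Log Analytics CPU alert rule might run. The Perf table, counter names, and the 90% threshold are my assumptions based on the classic Log Analytics agent schema; your workspace may differ.

// Average CPU per computer in 5-minute bins; flag bins above 90%
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName == "_Total"
| summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 5m), Computer
| where AggregatedValue > 90

Because the alert only carries the query and its results, any detail you want in the notification (how high the CPU was, and for how long) has to be projected out as columns by the query itself.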

Challenges with Azure Monitor alerts:

Alert readability: The first big challenge with these alerts is the volume of information that is contained in the alert. The sheer amount of data makes understanding what is in the alert very challenging. In a later blog post, I will show a simplified version of alerting focused on only providing the required information.

Alert Customization: Customization of alerts is not available for either metrics or logs. That is, you cannot suppress specific fields from an alert, add fields to an alert, or change the default structure of an alert.

Some excellent data is available to be queried via Kusto using AzureDiagnostics. For example, the ResultDescription field has almost all of the relevant information, but it’s in a format that needs to be parsed to grab specific fields from the column. Below is a sample query to parse the particular fields within this column. We start by identifying the records we need to split out (in our case, those that begin with “Computer”). And then, we project the two required fields (TimeGenerated and ResultDescription). Then we do a parse for the specific pieces of the column that we need to break out.

AzureDiagnostics
| where Category == "JobStreams" and ResultDescription startswith "Computer"
| sort by TimeGenerated
| project TimeGenerated, ResultDescription
| parse-where ResultDescription with * "Computer    :" Computer "\n" *
| parse-where ResultDescription with * "Category    :" Category "\n" *
| parse-where ResultDescription with * "TestGroup   :" TestGroup "\n" *
| parse-where ResultDescription with * "TestName    :" TestName "\n" *
| parse-where ResultDescription with * "Status      :" Status "\n" *
| parse-where ResultDescription with * "Description :" Description "\n" *
| parse-where ResultDescription with * "Message     :" Message "\n" *
| parse-where ResultDescription with * "RunTime     :" RunTime "\n" *
| project TimeGenerated, Computer, Category, TestGroup, TestName, Status, Message, Description, RunTime

Summary: If you need to pull multiple values out of a single field, try out the parse-where functionality! I owe a huge thank you to David Stein, who wrote this query. You rock, dude!

Warning: this blog post goes way into the weeds of Kusto and Log Analytics.

Let’s say for a minute that you wanted to call http_request_post to post some data to an API. The query below is a two-part query. The first part identifies any anomalies in usage data (a good query to check out on its own).

let min_t = toscalar(now(-1d));
let max_t = toscalar(now());
let content = Usage
| make-series statuschanges=count() default=0 on todatetime(TimeGenerated) from min_t to max_t step 1h by Type
| extend (flag_adx, score_adx, baseline_adx)=series_decompose_anomalies(statuschanges, 1.5, -1, 'linefit')
| project timestamp=todatetime(TimeGenerated[-1]), anomaly_score=score_adx[-1], flag=flag_adx[-1];

The second half of this would (in theory) write the content gathered in the first section via an http_request_post call. However, it does not work.

Please note: I am using http://www.bing.com just to show a URL, not to actually perform the http_request_post to send the data there.

let uri = "http://www.bing.com";
let headers = dynamic({});
let options = dynamic({});
let content_dummy = "some content here";
let content2 = tostring(toscalar(content));
evaluate http_request_post(uri, headers, options, content2);

The query appears to run, but it is not actually working. The issue is identified by the red underline on “content2”, shown below.

Kusto with content2 underlined

If you hover over the red underlined section, you see the real hint as to what’s wrong. “Error: The expression must be a constant. content2: string”

Kusto error: expression must be constant

What this is telling us is that http_request_post cannot be used unless the content argument is a constant. We can verify this by trying the hardcoded version of this query using “content_dummy”. Notice the lack of the red underline shown below:

content_dummy with no error

This puts us in a less-than-optimal situation, as http_request_post doesn’t let us post data that isn’t hardcoded. The workaround (thank you to Matt Dowst) is to wrap http_request_post in a function, as shown below.

let min_t = toscalar(now(-1d));
let max_t = toscalar(now());
let content = Usage
| make-series statuschanges=count() default=0 on todatetime(TimeGenerated) from min_t to max_t step 1h by Type
| extend (flag_adx, score_adx, baseline_adx)=series_decompose_anomalies(statuschanges, 1.5, -1, 'linefit')
| project timestamp=todatetime(TimeGenerated[-1]), anomaly_score=score_adx[-1], flag=flag_adx[-1];
let uri = "http://www.bing.com";
let headers = dynamic({});
let options = dynamic({});
let content_dummy = "some content here";
let content2 = tostring(toscalar(content));
let request = (uri:string, headers:dynamic, options:dynamic, json:string){
    evaluate http_request_post(uri, headers, options, json);
};
request(uri, headers, options, content2)

In the “let request” section above, we have created a function that does the same thing we were trying to do, but it runs http_request_post inside a function. If we hover over content2 again, we can see it is no longer underlined.

No error on content2

Summary: Are you running into “Error: The expression must be a constant” in Kusto? Try wrapping whatever you were trying to run in a function and then calling that function. I owe a huge shout-out to Matt Dowst, who provided the way to make this work.

Recently I was working on creating alerts from metrics, such as CPU, in Azure Monitor. I have spent a lot of time working on alerts with Log Analytics as a source, but not with metrics as a source. When defining CPU alerts (or any other type of metric alert), you can choose either a static threshold or a dynamic threshold (as shown below).

Figure 1: Dynamic thresholds for metrics


Operator options: Greater or Less than, Greater than, Less than

Aggregation type: Average, Maximum, Minimum, Total, Count

Threshold sensitivity: High, Medium, Low

The alert creation process lets you preview the results of this or any other type of alert. For example, the figure below shows the boundary range for where alerts would and would not be generated (an alert would be expected at about 10:30 am in the example below).

Figure 2: Configuring signal logic for a dynamic alert


So far, based on my experience, this functionality works well. I have had only one alert up to this point across the various systems being watched via this CPU alert with dynamic thresholds. But, I will admit, I had my concerns…

<FlashBack Start>

The year is now 2010. System Center Operations Manager (SCOM) is making waves in the monitoring industry and includes some exciting technology called “self-tuning thresholds.” The idea behind this functionality is that SCOM could monitor a performance counter over time and identify the normal range for that counter. These were a challenge to work with, as the logic behind the math used was not well explained initially.

Figure 3: Flashback to Self-Tuning Thresholds (STT)’s in SCOM


Did they work? The community consensus was that these alerts were often very noisy, and some metrics were not a good choice for a self-tuning threshold (such as a value that was consistently at 0).

<FlashBack End>

Summary: Dynamic thresholds in Azure Monitor appear to be an excellent way to identify changes in behavior for various metrics. They do not appear to be too noisy, and they can be tuned by altering the threshold sensitivity and operator options.
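If you want to build some intuition for what a dynamic threshold might flag on your own data, a rough approximation (this is not Azure’s actual algorithm) can be sketched in Kusto using series_decompose_anomalies against CPU data collected in Log Analytics. The Perf table and counter names here are my assumptions:

// Hourly CPU series per computer over the last week, with anomaly flags
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName == "_Total"
| make-series AvgCpu = avg(CounterValue) default = 0 on TimeGenerated from ago(7d) to now() step 1h by Computer
| extend (flag, score, baseline) = series_decompose_anomalies(AvgCpu, 1.5, -1, 'linefit')
| render anomalychart with (anomalycolumns=flag)

Points flagged here roughly correspond to the moments a dynamic-threshold alert would consider firing, subject to the sensitivity and violation counts you configure.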

In the next blog post we will disassemble the Azure alert email format.

Welcome to the “Introducing” series.

In the previous blog post, we introduced the Azure Marketplace. In this blog post, we introduce Azure Resource Manager (also known as ARM). We have a special guest author for this blog post, Steve Buchanan! Steve is a Microsoft Azure MVP and a great contributor to the technical community.

To find out more about Steve, check out his blog at www.buchatech.com or on Twitter as @Buchatech! With no further delay, let’s learn about ARM!

By this point you should be familiar with Microsoft’s public cloud service, Azure (which we introduced earlier in this series). In this blog post we are going to explore the engine of Azure: “Azure Resource Manager,” aka “ARM.” As the engine of Azure, ARM is core to the platform, serving as Azure’s deployment and management service. What do we mean by deployment and management service? ARM is what allows you to create, update, and delete resources in Azure.

Relevant Azure Structure

Within ARM you need to be familiar with the following (for more on the structure of Azure, see this blog post in the series):

Resource

An item in Azure that needs to be managed. A resource is created or assigned. Examples of resources are virtual machines, load balancers, virtual networks, storage accounts, IP addresses, and more.

Resource group

This is a container that holds resources. You place resources that share the same lifecycle in the same resource group so you can manage them together.

Resource provider

A service that supplies Azure services and their resources. Every service in Azure has a resource provider; for example, there is Microsoft.Network for virtual networks, load balancers, and so on; Microsoft.Kubernetes for AKS; Microsoft.Compute for virtual machines; and Microsoft.Storage for storage resources. A full list of the Azure resource providers can be found here: https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-services-resource-providers

How to interact with Azure ARM

Next, it is important to understand how you can interact with ARM. There are many ways to interact with ARM; these are:

Azure Portal

This is https://portal.azure.com. It is a web UI that you log into to interact with Azure.

ARM APIs

The ARM API is how you can access a number of REST operation groups to interact with ARM. A full list of the REST operations can be found here: https://docs.microsoft.com/en-us/rest/api/resources/

ARM SDKs

The ARM SDK is used to programmatically interact with Azure. You can access the SDK downloads for many languages here: https://azure.microsoft.com/en-us/downloads/

Azure PowerShell Module

The Az PowerShell module allows you to work with Azure directly from PowerShell. The Az PowerShell module has a set of cmdlets for working with Azure resources. You can learn more about this here: https://docs.microsoft.com/en-us/powershell/azure/new-azureps-module-az?view=azps-5.9.0

Azure CLI

The Azure CLI is Azure’s official Command-Line Interface. The CLI is a set of commands used to work with Azure resources. To learn more about the CLI visit: https://docs.microsoft.com/en-us/cli/azure/

ARM Templates

ARM Templates are used for infrastructure as code (IaC) with Azure. ARM Templates are JavaScript Object Notation (JSON) files that define your Azure infrastructure and configuration via declarative syntax. Learn more about ARM Templates here: https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview
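To make that concrete, here is a minimal sketch of an ARM template that deploys a single storage account. The parameter name, SKU, and API version are illustrative assumptions, not from this post:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "[parameters('storageName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}

Deploying the template asks ARM to make the resource group match what the file declares; that is the declarative part.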

Bicep

Bicep is a domain-specific language (DSL) for writing IaC for Azure. Bicep is an abstraction over ARM Templates, offering an easier language to work with compared to ARM Templates, which are based on JSON. Bicep files compile into ARM Templates and are then deployed to automate Azure. To learn more about Bicep visit: https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/bicep-overview
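For comparison, a hypothetical Bicep equivalent of the storage account sketch above (same caveats on names and API version) would look roughly like this; it compiles down to JSON like the template shown earlier:

// Minimal Bicep sketch: one storage account
param storageName string

resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: storageName
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}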

Azure Bicep, ARM Templates, the Az PowerShell module, and the Azure CLI are often used for automation with Azure. Regardless of which tool you choose to work with ARM, the key is that the experience and result are going to be consistent.

What is ARM used for?

ARM was designed for efficient resource organization and management. With ARM you have the following hierarchical structure available to help you organize from the tenant level down to individual resources:

Management Groups
|
Subscriptions
|
Resource Groups
|
Resources

You can apply settings and policies at each level for configuration and compliance needs. You can also add tags to label resources, and locks to protect resources from change or accidental deletion.

ARM was built to be resilient: it is never down for maintenance, it is not dependent on any single Azure data center, and it is distributed across regions and availability zones for continuous availability.

ARM was also built to be secure, allowing you to control who sees which resources and who can perform actions on those resources. Access through ARM is controlled via Role-Based Access Control (RBAC) powered by Azure Active Directory, giving you confidence in security at a group or user level.

As you can see, to truly understand Azure it is important to understand ARM. ARM is central to Azure. As you work with Azure, no matter what services you are using, how you are connecting, or what you are doing, you are essentially interacting with ARM. Thanks for reading, and I hope this blog post was insightful, giving you an introduction to Azure Resource Manager.

Thank you to Steve for your contribution to the Introducing series!


Welcome to the “Introducing” series. In the previous blog post, we introduced Azure Sentinel. In this blog post, we will introduce the Azure Marketplace.

What is the Azure Marketplace?

The Azure Marketplace provides a way to find, try out, and purchase applications and services that run in the Azure cloud. The solutions in the marketplace are simple to deploy and integrate with the existing billing in place for your Azure subscription(s). The marketplace provides more than 17,000 certified applications and services, split into a variety of categories.

Microsoft and other vendors provide additions that are available in the Marketplace.

Installing solutions

In most cases, the implementation of solutions in the Marketplace is very straightforward. To install a solution, click on it and then go to Create (see the Translator example below).

There may be some configuration required next; when that is completed, go to Review and create and then choose to create the resource.

The solution then deploys (you can check the status on the deployment by going to Notifications – shown below).

Once the deployment is complete you can easily go to the resource that was created.

Azure Private Marketplace: The private marketplace provides a way to choose which applications and services are available for deployment within your organization. You can also create applications or services that are made available only to your organization (or your customers) using the private marketplace. The private marketplace also allows administrators to select which third-party solutions their company will sanction and allow users to use. Think of it as a company store where you choose what software members of your company can purchase or use.

How are items in the Azure Marketplace priced?

The marketplace provides solutions which are free, free to try, pay as you go, or where you bring your own license.

Thank you to Tony N for his input on this blog post!



Welcome to the “Introducing” series. In the previous blog post, we introduced workbooks. In this blog post, we will introduce a “low-code” or “no-code” option for development called Logic Apps.

What does “low-code” or “no-code” mean?

Traditional development is done by writing software (i.e., code) to perform the actions that you want to take. For example, the browser you are currently using to access this website is an application that was written with code. The concepts of “low-code” and “no-code” provide a method to develop an application without having to write code, by using a graphical user interface and providing configurations within that interface. This approach is often embraced by people who are not developers, such as an IT professional like me.

What is Azure Logic Apps & what are common uses for it?

Azure Logic Apps is an automation platform that provides a graphical user interface and pre-built components (called connectors) that you can use to build automation, in most cases without having to write code.

We utilize Logic Apps to automate the delivery of scheduled reports, to gather data from different sources and act upon that data, or to perform automation that occurs when a specific situation is identified with other Microsoft solutions such as Azure Sentinel.

The graphic below shows a simple example of a Logic App which, on a scheduled basis, queries information from a data source (Log Analytics in this case) and then sends an email with the relevant information. We utilize Logic Apps on our managed automation team to provide delivery of scheduled reports to our customers.
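As an illustrative sketch (this query is my own example, not the report from the graphic), a scheduled Logic App of this kind might run the following Kusto query against Log Analytics and email the results:

// Machines that have not sent a heartbeat in the last 4 hours
Heartbeat
| where TimeGenerated > ago(1d)
| summarize LastSeen = max(TimeGenerated) by Computer
| where LastSeen < ago(4h)
| sort by LastSeen asc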

Logic Apps can also be used for more complex tasks, such as gathering data from an external Application Programming Interface (API). The graphic below shows a completed Logic App that queries an API to gather data; in this example, it gathers current weather data from the OpenWeather API. The same pattern could be used for any type of API.

What are connectors?

Connectors are pre-built components that can be used to assemble automation. They are triggered in some way (such as being scheduled to run on a specific recurrence, or when a web request is made), and they act after they are triggered. There are a variety of connectors for Logic Apps, which are documented here and here.

How are Logic Apps priced?

Logic Apps pricing is based on the number of actions that you perform and the connectors that you use within your automation. Details on pricing for Logic Apps are available here.

If you have ever seen Power Automate or Microsoft Flow, Logic Apps may look very familiar. This is logical because Flow is built on top of Logic Apps, they both share the same designer user interface, and they can share connectors. If you have worked with Flow you should find Logic Apps very simple to work with.



Welcome to the “Introducing” series. In the previous blog post, we introduced the query language Kusto. In this blog post, we will introduce another solution within Azure called Monitor (or Azure Monitor).

What is Azure Monitor?

Logically enough, Azure Monitor is used primarily to monitor resources that exist in Azure. There are exceptions to this via technology such as Azure Arc, but in general, Monitor provides monitoring and alerting for Azure resources.

What is Azure Monitor used for?

Monitor & Visualize Metrics: We introduced metrics in the Log Analytics portion of this blog series. For compute resources, an example of a metric in Azure Monitor would be Percentage CPU. The graphic below shows how metrics can be easily visualized through Azure Monitor.

Metrics are not limited to compute-type resources. Resources that exist in Azure either already provide (or likely soon will provide) metric information specific to their services. The graphic below shows how CPU percentage can be shown for non-compute resources (a SQL database in this case).
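As a side note, if you route platform metrics to a Log Analytics workspace via diagnostic settings, you can chart them with Kusto as well. Here is a sketch, assuming the AzureMetrics table and the SQL database cpu_percent metric name:

// Average SQL database CPU percentage in 15-minute bins, per resource
AzureMetrics
| where MetricName == "cpu_percent"
| summarize AvgCpu = avg(Average) by bin(TimeGenerated, 15m), Resource
| render timechart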

Insights: Insights provide pre-built visualizations for various services in Azure. Examples of these pre-built insights are shown below.

If we look into the Virtual Machines insights, we see common metrics which we would expect for compute-based resources (CPU, memory, bytes sent/received, and disk space used; shown in the graphic below).

Query & Analyze Logs: The ability to query and analyze logs in Azure Monitor brings us into Log Analytics via the query experience using Kusto. For details see previous blog posts in this series on these topics.

Alerts & Actions: Azure Monitor can generate alerts for conditions that it finds for resources that exist within your IT organization’s tenant. This is done by selecting the type of resource that you want to alert on, and the location of the resource.

The alerts pane shows a quick summary view of recently generated alerts (shown below), which can be used to drill into details on specific alerts. Azure Monitor also lets you define actions that occur when your alert fires, such as sending an email or calling a webhook.

Overall, Azure Monitor provides an effective method to provide monitoring and alerts for the resources which are available in your Azure subscription.

How is Azure Monitor priced?

Azure Monitor pricing is based on a variety of usage-related metrics (platform logs, metrics, health monitoring, alert rules, notifications, SMS, and Voice calls). For details on pricing check out the pricing calculator at Pricing – Azure Monitor | Microsoft Azure.


Welcome to the “Introducing” series. In the previous blog post, we introduced Log Analytics. In this blog post, we will introduce the query language used in a variety of areas including Log Analytics (introduced in the previous blog post).

What is Kusto?

Kusto or KQL (the Kusto Query Language) is a language that is used to process data and return results. It is an extremely powerful query language that can be used to perform complex queries on data stored in a variety of sources including Log Analytics.

Key pieces of Kusto:

  1. Queries start with the table that the data is stored in. As an example, a query can be as simple as “Usage”. That query shows the usage records for data sent into Log Analytics.
  2. Queries are extended by using a pipe (|). Example: “Usage | where TimeGenerated > now(-2hours)” shows any Usage type data written in the last 2 hours. Pipes can continue to be added such as here: “Usage | where TimeGenerated > now(-2hours) | project DataType, Quantity” which shows specific fields from the query.
  3. Project is extremely useful when you want to choose specific fields to show. You can also use project-away to remove specific fields from being shown (see the sketch after this list).
  4. “let” can be used to define a variable. As an example, this takes the results of the previous query and defines it as a variable called RecentUsage. Note the ; at the end to indicate the completion of a query.
    let RecentUsage = Usage | where TimeGenerated > now(-2hours) | project DataType, Quantity;
    RecentUsage
  5. “Sort” makes it easy to order how your data is shown. Example:
    let RecentUsage = Usage | where TimeGenerated > now(-2hours) | project DataType, Quantity | sort by Quantity desc;
    RecentUsage
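Putting a few of these pieces together, here is a small sketch; the QuantityUnit column is my assumption about the Usage table, so substitute any column you want to hide:

// Define a variable, sort it, then hide one column with project-away
let RecentUsage = Usage
| where TimeGenerated > now(-2hours)
| project DataType, Quantity, QuantityUnit
| sort by Quantity desc;
RecentUsage
| project-away QuantityUnit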

The Kusto language reference has proven to be invaluable to me; I highly recommend it, and I use the search functionality on that site regularly to find specific types of commands in Kusto and how to use them.

A quick history of Kusto

Work on Kusto started at Microsoft in 2013. In 2015, KQL was released to the world as part of Application Insights. In 2017, Log Analytics was ported to Kusto/ADX (Azure Data Explorer). Details on the timeline and how it interacts with OMS, Log Analytics and Application Insights are below:

Where is Kusto used?

Kusto is used in a variety of places in Azure and even outside of Azure. Areas where I am currently aware that it is used include:

How to use Kusto to get data out of Log Analytics

Queries that you run in Kusto can easily have their data exported by choosing the “Export” option shown below. Data can be exported as CSV (comma-separated values) or as an M query. CSV files are often used when exporting data to work with it in Microsoft Excel (part of the solutions available in the Microsoft 365 cloud).

M queries are used by applications such as Power BI to provide a method to integrate data stored in sources such as Log Analytics.

You can also query Log Analytics workspaces using Kusto to gather data and use it in automation solutions such as Flow or LogicApps.

How to use Kusto to act upon data in Log Analytics

Once there is data in the Log Analytics workspace, you can use Kusto queries to take action on the results that come back from those queries. Within Azure Monitor, you can create alerts that provide notifications when specific conditions occur based on the data that you have collected. To take the example from the Log Analytics blog post in this series, we could generate an alert when a group of CPUs has been over-utilized for some period of time, indicating that we should consider adding more compute resources. Alert rules can also be used to perform actions such as calling a webhook, which in turn can be used to perform an automated action. To bring this all together: if the group of CPUs providing a web application is over-taxed for over an hour, we could use an alert rule in Azure Monitor to call a webhook that triggers automation to add more compute resources. Within our managed automation team, we use a similar approach to reformat alerts into a structure that works better with our ticketing system.
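As a sketch of what such an alert query could look like (the Perf schema and the 80% threshold are my assumptions), an alert rule for the over-utilized group of CPUs might run:

// Average CPU across the whole group over the last hour
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName == "_Total"
| where TimeGenerated > ago(1h)
| summarize AvgCpu = avg(CounterValue)
| where AvgCpu > 80

An alert rule would then fire when this query returns a row (or when AvgCpu crosses the threshold, depending on how the rule is configured).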


Thank you to Oleg Ananiev for his information on the history of Kusto!
