Greene Tweed Develops and Deploys Smart Factory with Quisitive
Watch the video to learn more about how Quisitive helped Greene Tweed build their Smart Factory and improve safety, efficiency, and productivity in their factories.

Greene Tweed is a specialty engineering and manufacturing company that engaged Quisitive to execute a specific vision: a Smart Factory solution.

 

Learn how Quisitive built the Azure infrastructure to enable that vision with a solution that spans from source systems through reporting. With this new solution, Greene Tweed can leverage Azure Data Factory to make more informed decisions based on real-time data and predictive modeling.

Other topics in this video include:

AI, Big Data & Machine Learning
Data Security
Data System Consolidation
Predictive Modeling
Real-time Insights



See more of our work with Greene Tweed in our other video, Quisitive Builds Modern Data Platform for Greene Tweed. Watch Now >

 

Learn more about how Quisitive can help.
Fill out this quick form and our team will reach out shortly.

Discover 5 Reasons to Migrate SQL Servers to Azure
March 7, 2023
Get the stats. Find out why many organizations have made the decision to migrate Windows and SQL servers to Azure.
Simplify your migration from Windows and SQL Servers to Microsoft Azure with Quisitive.

Cloud adoption is on the rise as businesses today face market and supply chain disruptions unlike any they’ve faced in the past and turn to the cloud for the scale, flexibility, and security they need to keep up.

Quisitive will…

Assess your current environment to define in-scope workloads
Plan your migration to ensure you receive all of its benefits
Build your Azure infrastructure to support your workloads
Migrate identified workloads to Azure
Optimize your environment for efficiency, security, and costs
Access the infographic
Fill out the form and get instant access to the top 5 reasons to migrate SQL servers to Azure.

Save with your existing Windows Server or SQL Server Licenses

We meet you where you are on your cloud journey. Even on-premises workloads can benefit by extending capabilities using Azure services.

With the Azure Hybrid Benefit, you can apply existing Windows Server or SQL Server licenses to save on Azure virtual machines and Azure SQL.
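For example, on an existing Azure VM the benefit is applied by setting the license type. Below is a minimal sketch using the Az PowerShell module; the resource names are placeholders, not from this page.

# Hedged example (placeholder names): flag an existing Windows VM as covered by an
# on-premises license so Azure Hybrid Benefit pricing applies.
$vm = Get-AzVM -ResourceGroupName "<resourceGroup>" -Name "<vmName>"
$vm.LicenseType = "Windows_Server"
Update-AzVM -ResourceGroupName "<resourceGroup>" -VM $vm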

Contact us to learn more about cost savings opportunities when you migrate Windows and SQL Servers to Azure.

Learn more
Benefits of Migrating to Microsoft Azure

Your ideal cloud infrastructure should earn your trust with resilience, scalability, and cost efficiency. Migrate SQL Servers to Azure SQL or Windows Servers to Azure virtual machines to realize the benefits of the cloud.

Reduced Costs

Realize up to a 478% three-year ROI when you migrate Windows and SQL Servers to Azure.


Innovation

Accelerate the modernization of your infrastructure and applications with the power of cloud computing on Azure’s scalable and resilient platform.


Hybrid Capabilities

Use a combination of cloud and on-premises services where needed to maximize agility and value.


Security

Take advantage of multi-layered security across physical data centers, infrastructure, and operations, as well as Disaster Recovery.


About Quisitive

Quisitive is a premier, global Microsoft Partner that harnesses the Microsoft cloud platform and complementary technologies, including custom solutions and first-party offerings, to generate transformational impact for enterprise customers. Quisitive has consistently been recognized as a leading Microsoft Partner with 16 Specializations and all 6 Solution Partner Designations. Quisitive’s Microsoft awards include the 2023 US Partner of the Year Winner for Health and Life Sciences, 2023 US Partner of the Year Winner for Solutions Assessment and 2023 US Partner of the Year Finalist for the Industrial and Manufacturing vertical.


Microsoft Defender for Cloud provides recommendations on various items related to Azure resources (and on-premises resources via Azure Arc). For example, one of the recommendation types focuses on vulnerabilities that may exist on virtual machines (VMs). Microsoft provides two built-in vulnerability assessment solutions for VMs.

One is “Microsoft Defender vulnerability management,” and the other is the “integrated vulnerability scanner powered by Qualys” (referred to from here forward as “Qualys”). Microsoft includes both solutions as part of Microsoft Defender for Servers. In addition, Microsoft has made “Microsoft Defender vulnerability management” (referred to from here forward as “Default”) the default vulnerability scanner. These two options are shown below in Figure 1.

Figure 1: Vulnerability assessment solutions currently available

My recommendation?

I recommend using the Qualys scanner instead of the Default vulnerability scanner, because the Qualys scanner looks for more vulnerabilities and therefore produces more complete findings.
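If you want to move a machine onto the Qualys scanner without clicking through the portal recommendation, one approach I am aware of is deploying the Microsoft.Security/serverVulnerabilityAssessments child resource on the VM. The sketch below is only an illustration under that assumption; the resource path and API version should be verified against current documentation.

# Hypothetical sketch: onboard the integrated Qualys scanner on a single VM via Invoke-AzRestMethod.
# Angle-bracket values are placeholders, and the api-version is an assumption to verify.
$vmId = "/subscriptions/<subscription>/resourceGroups/<resourceGroup>/providers/Microsoft.Compute/virtualMachines/<vmName>"
Invoke-AzRestMethod -Method PUT `
    -Path "$vmId/providers/Microsoft.Security/serverVulnerabilityAssessments/default?api-version=2020-01-01"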

If you want to go further into the weeds from what I found, feel free to continue reading through the functional comparison, FAQ, and Reference Links sections below.

Functional comparison:

FAQs:

Figure 2: Two machines with one onboarded to each vulnerability scanner.

Reference links:

Qualys usage is included per this article: Defender for Cloud’s integrated vulnerability assessment solution for Azure, hybrid, and multicloud machines | Microsoft Learn

So, what is your experience with these options? Do you have any insights that you can provide? Please feel free to reach out to me with them on LinkedIn or Twitter!

Your Guide to Successful Integration for Mergers and Acquisitions
M&A Integration: Critical Components of Successful Mergers, Acquisitions and Divestitures

When an organization undergoes a merger, acquisition or divestiture, there are many technological elements that need to be considered to ensure everyone involved can work together — or separately — effectively without sacrificing necessary systems or tooling.

 

Oftentimes, this means more than migrating mailboxes. The organization must consider other M&A integration elements such as its Azure tenant, data platforms, applications, security, ERP systems, and more.

 

Read this ebook and discover the Critical Components of a Successful Merger and the technology integration “Must Dos”. 

 

Learn more about our M&A services.
If you would like to talk to an M&A Technical Consultant, fill out the form here. You do not have to fill out this form to download the eBook.


Infographic: Top 5 Reasons to Migrate to the Cloud
Understand why so many companies have made the move to the cloud with our infographic, Top 5 Reasons to Migrate to the Cloud.
View the full PDF to understand why so many companies have made the move to the cloud.
Talk to a cloud expert
Fill out the form and our team will reach out to you shortly.



This blog post shows the next step in creating a custom alert format using a combination of Kusto and Azure Automation. In the first part of this blog post series, I introduced the steps I use to provide what I refer to as a "human-readable alert". This series provides an alternative to the default alert formatting available in Azure Monitor. Relevant blog posts:

As a reminder, the solution we are using is built from four major components: a custom Kusto query, an alert notification, Azure Automation, and Logic Apps (or Azure Automation with SendGrid). To get to the script below, we use Azure Monitor with a Kusto query, and an action group sends a webhook to a runbook in Azure Automation. This blog post shows how to create the Azure Automation script, how to create the webhook, and how to integrate the webhook into an action group.

Creating the Azure Automation script:

The script below takes an incoming webhook payload, parses it for the relevant information, formats the result as JSON, and calls a Logic App webhook that emails the reformatted content.

<#
.SYNOPSIS
    Take an alert passed to this script via a webhook and convert it from JSON to a formatted email structure
#>
param (
    [Parameter(Mandatory=$false)]
    [object] $WebhookData
)

# If the runbook was called from a webhook, WebhookData will not be null.
if ($WebhookData) {
    Write-Output "Webhook data $WebhookData"

    # Logic to allow for testing in Test Pane
    if (-Not $WebhookData.RequestBody) {
        $WebhookData = (ConvertFrom-Json -InputObject $WebhookData)
        Write-Output "test Webhook data $WebhookData"
    }
}

# Flatten the tables returned by the Log Analytics query API into an array of PSObjects
function CreateObjectView {
    param(
        $data
    )

    # Find the number of entries we'll need in this array
    $count = 0
    foreach ($table in $data.Tables) {
        $count += $table.Rows.Count
    }

    $objectView = New-Object object[] $count
    $i = 0
    foreach ($table in $data.Tables) {
        foreach ($row in $table.Rows) {
            # Create a dictionary of properties keyed by column name
            $properties = @{}
            for ($columnNum = 0; $columnNum -lt $table.Columns.Count; $columnNum++) {
                if ([string]::IsNullOrEmpty($row[$columnNum])) {
                    $properties[$table.Columns[$columnNum].name] = $null
                }
                else {
                    $properties[$table.Columns[$columnNum].name] = $row[$columnNum]
                }
            }
            $objectView[$i] = (New-Object PSObject -Property $properties)
            $null = $i++
        }
    }
    $objectView
}

$Spacer = ": "
$linespacer = "</p><p>"

$RequestBody = ConvertFrom-Json -InputObject $WebhookData.RequestBody

# The common alert schema nests the alert payload under a "data" property
if ($RequestBody.SearchResult -eq $null) {
    $RequestBody = $RequestBody.data
}

# Get all metadata properties
$WebhookName = $WebhookData.WebhookName
Write-Output "Webhookname is: $($WebhookName | Out-String)"

$AlertId = $RequestBody.essentials.alertId
Write-Output "AlertId is: $($AlertId | Out-String)"

$AlertRule = $RequestBody.essentials.alertRule
Write-Output "AlertRule is: $($AlertRule | Out-String)"

$QueryResults = $RequestBody.alertContext.condition.allOf.linkToSearchResultsUI
Write-Output "QueryResults is: $($QueryResults | Out-String)"

# Connect to Azure with the Automation account's managed identity
Connect-AzAccount -Identity
Set-AzContext -SubscriptionId "<subscription>"

# Call the search results API link from the alert to retrieve the full query results
$AccessToken = Get-AzAccessToken -ResourceUrl 'https://api.loganalytics.io'
$data = Invoke-RestMethod -Uri $RequestBody.alertContext.condition.allOf.linkToSearchResultsAPI -Headers @{ Authorization = "Bearer " + $AccessToken.Token }

Write-Output "Information found from the linkToSearchResultsAPI $($data)"

$SearchResults = CreateObjectView $data

Write-Output "SearchResult is: $($SearchResults | Out-String)"

# Get detailed search results and send one email per returned row
foreach ($Result in $SearchResults)
{
    Write-Output "In search results"

    $Body = $Result.Body
    Write-Output "Result.body is $($Body)"

    # Append a link to the full query results to the body built by the Kusto query
    $Body = $Result.Body + "<p>Query: <a href=""" + $QueryResults + """>Link</a>"
    $Subject = $Result.Subject
    $NotificationEmail = $Result.NotificationEmail
    $ResourceId = $Result._ResourceId

    Write-Output "Subject is: $($Subject | Out-String)"
    Write-Output "Body is: $($Body | Out-String)"
    Write-Output "ResourceId is: $($ResourceId | Out-String)"
    Write-Output "NotificationEmail is: $($NotificationEmail | Out-String)"

    # Build the JSON payload the Logic App will parse to send the email
    $params = @{"To"="$NotificationEmail";"Subject"="$Subject";"Body"=$Body;"From"="$NotificationEmail"}
    $json = ConvertTo-Json $params

    Write-Output "json value is $($json)"

    # Call the LogicApp that will send the email
    $uri = "<URLToLogicApp>"
    Invoke-RestMethod -Uri $uri -Method Post -Body $($json)
}
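If you want to exercise this script in the runbook's Test pane before wiring up the webhook, the "Logic to allow for testing in Test Pane" branch expects the WEBHOOKDATA parameter to be a JSON string whose RequestBody is itself a JSON string. The snippet below is a hypothetical helper, not part of the original solution; the field names mirror the ones the script reads and the values are placeholders.

# Hypothetical Test pane payload builder; paste the resulting JSON into the WEBHOOKDATA box.
$testPayload = @{
    WebhookName = "TestWebhook"
    RequestBody = (@{
        data = @{
            essentials   = @{ alertId = "<alertId>"; alertRule = "<alertRuleName>" }
            alertContext = @{
                condition = @{
                    allOf = @(
                        @{
                            linkToSearchResultsUI  = "<linkToSearchResultsUI>"
                            linkToSearchResultsAPI = "<linkToSearchResultsAPI>"
                        }
                    )
                }
            }
        }
    } | ConvertTo-Json -Depth 10)
}
$testPayload | ConvertTo-Json -Depth 10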

Where to create the webhook for the Azure Automation runbook:

Once the runbook has been saved and published in Azure Automation, there is an option at the top to "Add webhook", as shown in Figure 1 below.

Figure 1: Adding a webhook to a runbook


Use the "Create new webhook" option, as shown in Figure 2.

Figure 2: Creating a new webhook


Give the new webhook a name, make sure it is enabled, set the expiration date, and copy out the URL from the webhook as shown in Figure 3.

Figure 3: Creating a new webhook – specifying the name and expiration


Finish the steps to create the webhook. Now that we have the webhook, we can add it to the appropriate action group.
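If you prefer scripting over the portal, the same webhook can be created with the Az.Automation module. This is a rough sketch with placeholder names; note that the webhook URI is only returned at creation time.

# Sketch: create the runbook webhook with PowerShell (names are placeholders).
$webhook = New-AzAutomationWebhook `
    -ResourceGroupName "<resourceGroup>" `
    -AutomationAccountName "<automationAccount>" `
    -RunbookName "<runbookName>" `
    -Name "AlertFormatterWebhook" `
    -IsEnabled $true `
    -ExpiryTime (Get-Date).AddYears(1) `
    -Force
$webhook.WebhookURI   # copy this now; it cannot be retrieved later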

How to integrate the webhook for the Azure Automation runbook into an action group:

Now that we have the webhook to call the runbook we created, we can add the integration for the alert in Azure Monitor. The integration requires a rule, an action group, and configuration for the action group to use the webhook. The details of the rule are explained in this previous blog post. To create an action group, open Monitor and open "Action groups", shown below in Figure 4.

Figure 4: Creating an action group


Choose the option to create an action group and then specify the configurations required (shown in Figure 5) for the subscription, resource group, action group name and display name.

Figure 5: Configuring an action group


Choose the Webhook action type, give it a name, and provide the URL gathered in the previous section of this blog post (shown in Figure 6). Please note that there are other (potentially better) options in the Action type list, such as Automation Runbook or Secure Webhook.

Figure 6: Configuring an action group's actions

To complete the integration between the alert and the call to the runbook, assign the Action group to the alert.
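For reference, the action group and its webhook action can also be created from PowerShell. The following is only a sketch using Az.Monitor cmdlets (names and the webhook URL are placeholders, and the available cmdlets vary between Az.Monitor versions).

# Sketch: create an action group with a webhook action pointing at the runbook webhook.
$receiver = New-AzActionGroupReceiver -Name "CallAlertRunbook" `
    -WebhookReceiver -ServiceUri "<webhookUrlFromPreviousSection>"
Set-AzActionGroup -ResourceGroupName "<resourceGroup>" `
    -Name "HumanReadableAlerts" `
    -ShortName "HRAlerts" `
    -Receiver $receiver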

Summary: This blog post shows how to create a runbook that takes an existing Kusto-generated alert and reformats it into a human-readable format. This is done by leveraging Azure Monitor, alert rules, action groups, Azure Automation, and PowerShell called through a webhook. The next blog post will explain how to use a Logic App to parse the JSON produced by the Azure Automation script provided in this blog post.

This blog post shows how to create a custom alert format using a combination of Kusto and Azure Automation. This process is used to overcome the current inability to generate custom alert formats discussed in the previous blog post of this series. I previously blogged on this topic, but the solution has evolved significantly since then.

The solution we are using is built from four pieces: a custom Kusto query, alert notification, Azure Automation, and LogicApps or Azure Automation with SendGrid.

This issue is discussed online here: Configure Email Template for Azure Alerts – Stack Overflow, and here: Can we customize the body content of the Azure alert emails from code? – Microsoft Q&A.

The previous resolution that we provided is available here. That previous version will no longer work, as Microsoft has changed the format of alert content with the "common alert schema".

What is the benefit of customizing Azure Alerts?

In the previous blog post, I explained that the default alerting in Azure is great for integration into ticketing systems but is not very human-readable due to the amount of information in the alert (on average 3-4 pages). Through custom alert formatting, we can create an alert with only the required information such as the one below.

Figure 1: Simplified email alert


The subject itself tells you nearly everything you need to know: the CPU on a specific server is too high (90% in this example), and it's consistently too high (100% of the time). The message contents include the same information in separate, easy-to-read fields and provide a link to the query for more information. This approach also lets you fully control how your alerts are formatted, so you can add, remove, or change anything included in the email.

How does this solution work?

The full process we are using is shown below in Figure 2.

Figure 2: Process flow for log analytics and metric based alerts.


Part 1: The Kusto query

The first part of this solution requires the creation of a Kusto query that not only identifies the condition we are looking for but also provides the key pieces required to format the alert effectively.

In the sample query below, we define the threshold we are looking for (the CpuPercentage metric needs to be between 90-100%, and it needs to be in that range at least 90% of the time). This makes the alert much more actionable, as it indicates consistently high CPU utilization rather than a short spike. We also use this query to build the set of fields that will later be used to format the alert, specifically NotificationEmail, Subject, and Body. Every alert sent via this solution must have these fields defined.

let CounterThresholdMax = 100;
let CounterThresholdMin = 90;
let CounterThresholdPct = 90;
let NotificationEmail = "<emailaddress>";
AzureMetrics
| where ResourceProvider == "MICROSOFT.WEB" and MetricName == "CpuPercentage"
| summarize
    Avg = avg(Average),
    OverLimit = countif(Average >= CounterThresholdMin and Average <= CounterThresholdMax),
    PerfInstanceCount = count(Resource),
    PctOver = round(todouble(todouble(((countif(Average >= CounterThresholdMin and Average <= CounterThresholdMax) * 100)) / todouble((count(Resource))))))
    by Resource
| where PctOver > CounterThresholdPct
| extend Subject = strcat("CPU too high on ", Resource, " at an average of ", toint(Avg), "%. Above threshold ", toint(PctOver), "% of the time")
| extend Body = strcat(@"<p>Resource: ", Resource, "</p>", "<p>Average CPU: ", toint(Avg), "</p>", "<p>% CPU over Limit: ", toint(PctOver), "</p>")
| extend NotificationEmail = NotificationEmail

Part 2: Configuring alert notification

The alert needs to be configured in the following ways:

Figure 3: Not splitting by dimensions


Figure 4: Alert logic


Figure 5: Webhook call to runbook


Part 3: Receiving the alert and processing it (Azure Automation)

This step is accomplished via a PowerShell runbook running in Azure Automation. It is called by the webhook configured in the notification group. Details on this script will be provided in the next post in this blog series.

Part 4: Sending the alert (LogicApps or Azure Automation with SendGrid)

This step is accomplished via a LogicApp or using Azure Automation integrated with SendGrid. The details on the LogicApps option will be provided two posts later in this blog series.
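As a preview of the SendGrid variant, the runbook could post the NotificationEmail, Subject, and Body fields built by the Kusto query directly to SendGrid's v3 mail send API instead of calling a Logic App. This is only a rough sketch under that assumption, not the implementation covered later in the series; the Automation variable name and sender address are placeholders.

# Hypothetical sketch: send the formatted alert via SendGrid from an Automation runbook.
$sendGridApiKey = Get-AutomationVariable -Name "SendGridApiKey"
$mail = @{
    personalizations = @(@{ to = @(@{ email = $NotificationEmail }) })
    from             = @{ email = "<senderAddress>" }
    subject          = $Subject
    content          = @(@{ type = "text/html"; value = $Body })
}
Invoke-RestMethod -Uri "https://api.sendgrid.com/v3/mail/send" -Method Post `
    -Headers @{ Authorization = "Bearer $sendGridApiKey" } `
    -ContentType "application/json" `
    -Body (ConvertTo-Json $mail -Depth 10)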

Summary: If you need custom-formatted alerts, this is the best method we have found to date. In the next blog post, we will showcase the updated Azure Automation runbook designed to receive and process the alert.

In this blog post, we will disassemble the alert structure from Azure for metrics and logs, compare what is included in each alert, and point out challenges in the alert functionality currently available in Azure. In the first part of this blog series, we introduced the new dynamic threshold functionality available in Azure Monitor.

So, what do the alerts look like?

The answer is that it varies widely based on what type of an alert it is (metric or log analytics based).

Metric-based alert format:

Below is a sample alert format based on what I have seen for this CPU alert when it is a metric type of alert.

Subject format:

Subject samples:

Body sample: (items in bold below are fields that do not exist in Log Analytics based alerts and are moved to the bottom for readability)

Microsoft has effectively provided all of the potentially relevant content in the email. This is logical as these emails may be sent to ticketing systems, so any relevant fields should be included.

For a metric, the information is pretty much there for what you need to know about an alert condition.

From a usability perspective, the content of the alert has the relevant information, including a link to the alert in Azure monitor, a link to the alert rule, the metric’s value, and how many violations have occurred versus how many periods the alert was examined over.

Log Analytics-based alert format:

Now let's look at what happens if you decide to use Log Analytics as your data source for an alert. Below is a sample alert format based on what I have seen for this CPU alert when it is a Log Analytics type of alert. The subject format is the same, as shown below.

Subject format:

Subject samples:

Body sample: (items below in bold are fields that do not exist for Metric based alerts and are moved to the bottom for readability)

The additional fields are related to the alert description, how query results are split, and include links to search results and the query run as part of the alert. The alert does not know what the query is checking for; it just knows that the query is being run and the result sent back from the query.

NOTE: As an example, for a high CPU condition, the alert may not know how high the CPU is, how long it has been that high, and other relevant pieces of information.

Challenges with Azure monitor alerts:

Alert readability: The first big challenge with these alerts is the volume of information that is contained in the alert. The sheer amount of data makes understanding what is in the alert very challenging. In a later blog post, I will show a simplified version of alerting focused on only providing the required information.

Alert Customization: Customization of alerts is not available for either metrics or logs. That is, you cannot suppress specific fields from an alert, add fields to an alert, or change the default structure of an alert.

Some excellent data is available to be queried via Kusto using AzureDiagnostics. For example, the ResultDescription field has almost all of the relevant information, but it's in a format that needs to be parsed to grab specific fields from the column. Below is a sample query that parses the particular fields within this column. We start by identifying the records we need to split out (in our case, those that begin with "Computer"). Then we project the two required fields (TimeGenerated and ResultDescription), and finally we parse out the specific pieces of the column that we need.

AzureDiagnostics
| where Category == "JobStreams" and ResultDescription startswith "Computer"
| sort by TimeGenerated
| project TimeGenerated, ResultDescription
| parse-where ResultDescription with * "Computer    :" Computer "\n" *
| parse-where ResultDescription with * "Category    :" Category "\n" *
| parse-where ResultDescription with * "TestGroup   :" TestGroup "\n" *
| parse-where ResultDescription with * "TestName    :" TestName "\n" *
| parse-where ResultDescription with * "Status      :" Status "\n" *
| parse-where ResultDescription with * "Description :" Description "\n" *
| parse-where ResultDescription with * "Message     :" Message "\n" *
| parse-where ResultDescription with * "RunTime     :" RunTime "\n" *
| project TimeGenerated, Computer, Category, TestGroup, TestName, Status, Message, Description, RunTime

Summary: If you need to go through a field that contains multiple values, try out the parse-where functionality! I owe a huge thank you to David Stein who wrote this query. You rock dude!

Warning: this blog post goes way into the weeds of Kusto and Log Analytics.

Let's say for a minute that you wanted to call http_request_post to post some data to an API. The query below has two parts. The first part identifies any anomalies in usage data (a good query to check out on its own).

let min_t = toscalar(now(-1d));
let max_t = toscalar(now());
let content = Usage
| make-series statuschanges=count() default=0 on todatetime(TimeGenerated) from min_t to max_t step 1h by Type
| extend (flag_adx, score_adx, baseline_adx)=series_decompose_anomalies(statuschanges, 1.5, -1, 'linefit')
| project timestamp=todatetime(TimeGenerated[-1]), anomaly_score=score_adx[-1], flag=flag_adx[-1];

The second half of this would (in theory) write the content gathered in the first section via an http_request_post call. However, it does not work.

Please note: I am using http://www.bing.com just to show a URL, not to actually perform the http_request_post to send the data there.

let uri = "http://www.bing.com";
let headers = dynamic({});
let options = dynamic({});
let content_dummy = "some content here";
let content2 = tostring(toscalar(content));
evaluate http_request_post(uri, headers, options, content2);

The query appears to run, but it is not actually working. The issue is identified by the red underline on "content2", shown below.

Kusto with content2 underlined

If you hover over the red underlined section, you see the real hint as to what’s wrong. “Error: The expression must be a constant. content2: string”

Kusto error expression must be constant

What this is telling us is that http_request_post cannot be used unless the content argument is a constant. We can verify this by trying the hardcoded version of this query using "content_dummy". Notice the lack of the red underline shown below:

content_dummy with no error

This puts us in a less-than-optimal situation, as http_request_post doesn't let us send data that isn't hardcoded. The workaround (thank you to Matt Dowst) is to wrap http_request_post in a function, as shown below.

let min_t = toscalar(now(-1d));
let max_t = toscalar(now());
let content = Usage
| make-series statuschanges=count() default=0 on todatetime(TimeGenerated) from min_t to max_t step 1h by Type
| extend (flag_adx, score_adx, baseline_adx)=series_decompose_anomalies(statuschanges, 1.5, -1, 'linefit')
| project timestamp=todatetime(TimeGenerated[-1]), anomaly_score=score_adx[-1], flag=flag_adx[-1];
let uri = "http://www.bing.com";
let headers = dynamic({});
let options = dynamic({});
let content_dummy = "some content here";
let content2 = tostring(toscalar(content));
let request = (uri:string, headers:dynamic, options:dynamic, json:string){
    evaluate http_request_post(uri, headers, options, json);
};
request(uri, headers, options, content2)

In the request function definition above, we do the same thing we were trying to do before, but http_request_post now runs inside a function. If we hover over content2 again, we can see it is no longer underlined.

No error on content2

Summary: Are you running into “Error: The expression must be a constant” in Kusto? Try wrapping whatever you were trying to run in a function and then calling that function. I owe a huge shout-out to Matt Dowst who provided the way to make this work.