Using Tachyon V8 to enhance employee experience and productivity | Quisitive

While we have all adjusted to the shift to remote work caused by COVID, we are now also seeing an additional shift in which employees are voluntarily leaving their jobs (primarily in the United States). Together, these shifts have created a world where it is no longer enough simply to provide employees with a job. The large numbers of help-wanted signs, and the businesses closed due to a lack of staff, are obvious examples of this shift. Employees feel they can find a new job easily, so it is up to employers to find creative ways to hire and retain talent (see the figure at the start of this blog post for a classic example of this).

One of the key methods of retaining talent is to provide a positive digital employee experience. I recently had the opportunity to get a look at what we can expect in the upcoming release of Tachyon V8. Logically, the Tachyon V8 release continues the shift in focus towards empowering employees by providing them with a positive digital employee experience. 1E presented several different methods of improving employee experience with Tachyon V8. A few that I found interesting are discussed below: Employee Wellbeing Campaign, Executing a new software rollout adoption plan, and Tachyon Welcome.

Employee Wellbeing Campaign

The first topic they raised was the idea of launching an Employee Wellbeing campaign. An employee wellbeing campaign can identify employees who are working on their devices well beyond an eight-hour workday. These employees can be at risk of burnout and can be targeted with specific sentiment surveys to check on their well-being more directly.

Tachyon already had components available that provided Sentiment surveys and Informational surveys. Tachyon V8 adds “Interactions” and “Announcements”.

The combination of these four forms of communication (Sentiment surveys, Informational surveys, Interactions, Announcements) provides a platform to identify employees at risk of burnout and to help address these risks.

Executing a new software rollout adoption plan

Effectively rolling out new applications or new application versions is another area where Tachyon V8 brings new capabilities. Instead of just deploying a new application, Tachyon can help you identify the champions and detractors for the application in your organization. Understanding which users are champions of a new application can help you prioritize them early in the software deployment. These champions can also be directed to the training that would be most useful for early adopters. On the flip side of the coin, people who don’t want the software can be de-prioritized for deployment so they receive it as late as possible. Their training may then focus on the specific new features that would be most beneficial to them compared with the previous version.

Based on my experience, I would expect these types of approaches to help decrease the friction a company experiences when operating systems must be retired. As an example, eventually Windows 10 will no longer be a supported operating system. This means that Windows 11 will need to be adopted and Windows 10 will need to be decommissioned. A change of this nature is often met with resistance from some end-users. By prioritizing deployment of Windows 11 to internal champions, you can leverage those champions to help convince less enthusiastic employees of the benefits and new features available in the new operating system.

Tachyon Welcome

While it was only discussed briefly, it sounds like “Tachyon Welcome” is designed to address employee onboarding and devices for the work-from-anywhere era. The solution provides device provisioning or device replacement by augmenting Windows Autopilot.

My thoughts (Mergers/Acquisitions & Time Draining applications)

There appear to be a lot of benefits for companies going through mergers or acquisitions. For an acquisition, it would be helpful to identify employees whose actions indicate that they may be interested in leaving the organization. Specific targeted steps for at-risk employees could help with employee retention. Another area where 1E could be useful in a merger or acquisition is software asset prioritization. Through surveys, companies can identify which of the organization’s software solutions are an asset and which are a liability. The organization can then focus on integrating the good software solutions and discontinuing the usage of the bad ones.

Another area of Tachyon V8 that interests me is the concept of identifying time-draining applications. These could be badly written applications, applications with bad processes (such as a complicated time-entry system), or applications that pull employees’ focus away from work (such as social media sites). Identifying these time-draining applications can help the organization decide which applications should be redesigned and which to consider blocking on company assets.

Additional resources

One of the challenges that we have seen while working with Teams or other video conferencing platforms is a general slowdown of meeting audio when multiple videos are shared such as in a classroom environment or a large company meeting. During most of the day, this isn’t a problem, but as we hit certain points of the day (afternoon most often) there is a significant slowdown that can occur. This blog post will focus on debugging conference call latency issues in Microsoft Teams, but these issues will occur on any video conferencing software platform.

An important part of this to realize is that performance for video sharing can be impacted by several underlying issues causing conference call latency:

Additionally, it can be a combination of any of the above. Answering a simple question like “Why is my conference call latency poor, causing issues such as choppy audio or problems seeing the screen share?” isn’t really that simple.
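One of those underlying factors, the latency of your own connection, is easy to quantify yourself. As a rough illustration (this is not part of Teams or any specific tool, and the function name and sample count are my own choices), here is a small Python sketch that measures average TCP connect latency to a host:

```python
import socket
import time

def tcp_connect_latency_ms(host, port, samples=5):
    """Return the average TCP connect latency to host:port, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        # Each connect/close round trip approximates one latency sample.
        with socket.create_connection((host, port), timeout=5):
            pass
        total += time.perf_counter() - start
    return total / samples * 1000.0
```

Running this against a nearby host versus a distant one, or while a large download is saturating your link, gives a feel for how much of the problem sits on your side of the connection.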

The graphic below shows a simplified version of how each of the attendees connects to a video conference. In most cases, there is someone running the meeting who we will refer to as a presenter (the teacher in a classroom setting). There are also several attendees (the students in the classroom setting). Each of these attendees is connecting to the internet through some manner (cable, fibre optic, ADSL, etc.) which are represented by the lines between the presenter and attendees to the internet. The connectivity is likely to different Internet Service Providers but we will simplify this to show that they are all connecting to the internet somehow. From their internet connection, they are each communicating with the video conference application (Teams in this blog post example).

Graphic 1: How people connect to a video conference when all is working well


There are a lot of parts that must function correctly for this whole process to work. The presenter and attendees all need functional internet connectivity, and the video conference application must be up and performing effectively. If any problem occurs anywhere in the diagram, there will be problems in the video conference. As an example, if one attendee has a slow internet connection, it will impact their ability to see the video conference (including what is shared on the screen, audio from the presenter, etc.). The slow link is shown in graphic 2 below by changing the color of the link between the attendee and the internet to yellow.

Graphic 2: How people connect to a video conference when one attendee has a slow internet connection


If the person who is presenting (or teaching) has a slow internet connection, it will impact all of the attendees’ (students’) ability to see what is being shared on the screen, as well as the audio and video from the presenter. This is represented in graphic 3 by the yellow line between the presenter and the internet.

Graphic 3: How people connect to a video conference when the presenter has a slow internet connection


If internet service providers are experiencing a slowdown (most likely due to additional network traffic occurring during this outbreak), this will impact all of the attendees of the video conference as shown in graphic 4.

Graphic 4: How people connect to a video conference when the internet service providers or connections are slow


Finally, if there is an issue with the underlying video-conferencing application this will also impact all attendees of the video, causing conference call latency issues as shown in graphic 5.

Graphic 5: How people connect to a video conference when the video conference application is slow


How to debug problems during video conferences

The above graphics should show that there are many different things which can cause a problem during a video conference. So how can we debug this situation?

Common issues & resolutions:

Tips & Tricks:

Feedback from a colleague on this blog post

I sent this blog post to David B, who had the following thoughts for consideration (this has been consolidated to specific bullet points):

Configuring an email notification for service incidents

In the Microsoft 365 admin center, under preferences, you can set up an email notification if there are service health issues for the services that you are interested in. To set this up, open the Service health view and click on Preferences (highlighted below).

Conference call latency: Service health preferences

If you enable the checkbox which says “Send me service health notifications in email” you can specify whether to include incidents (which we are looking for in this case).

Preferences - part 1

You can choose what specific services you want to be notified about (Microsoft Teams and SharePoint Online in this example).

Preferences - part 2

This notification should be sent to the technical contact at your organization, or to the most technical person available, so they can determine whether the incident will impact your organization.

Configuring a Teams site to test connectivity

You can create a Teams site which has different pages which will help with debugging connectivity issues. For this Teams site, you can add a webpage that points to one location to check your internet connection and a second webpage that checks your connectivity to Office 365. These provide a quick way to debug what could be causing conference call latency and communication issues.

To configure this, I created a new Team called “Teams Status”. On this team, I used the + sign to add a new tab.

Adding tab in Teams

I created two tabs, one called “Internet Connectivity Test” and one called “Teams Connectivity Test”. For each of these, I added them as a website from the options shown below.

Add a tab - options

For this new tab, you just need to type in the name of the website and add the URL you want it to go to.

Adding a website

Below are screenshots from my two websites that are available directly in Teams so it’s easier to track down what may be causing issues.

If you show more information, it gives more details which can help with debugging connectivity from your location. The URL I added was:

In the example below, we can see that my internet speed is 32 Mbps, unloaded latency is 14 ms, and loaded latency is 595 ms. Unloaded latency is how long it takes to connect when there is not much load on your link to the ISP. Loaded latency is how long it takes to connect when there is load on the link to your ISP.

ISP connection speed

The Teams Connectivity Test checks the page load time. The URL I added was:

In the example below, we can see that the load time is 365 ms.

Conference call latency connectivity test to O365

Additional reference:

Summary: Understanding how video conferencing systems work from a high level can help you to debug problems and work around them more quickly. Hopefully, this blog post has given you a quick crash course and has given some tips which will help to make your meetings (or classes) continue to go on without a hitch and avoid conference call latency!

Unless you want to pay for premium connectors like Plumsail to handle permissions in Power Automate, there’s no easy way to work with permissions in your flows. But there is a way to change permissions and permissions levels using the good old “Send HTTP Request to SharePoint”.

I had a requirement to build a site archival solution that, once the archival was approved, would change the permission level for the Owners group from “Full Control” to “Read”.

So this was our starting point, standard vanilla SP permissions:


Here’s the full flow that is needed to get the job done; we’ll break down each part.


First step is to figure out what our Owners group object is. The first HTTP call will get all the groups on the site, with a filter on: Group Title contains ‘Owner’
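As a sketch of what that “Send an HTTP request to SharePoint” action can look like (the endpoint is the standard SharePoint REST API; the exact filter text in your flow may differ):

```
Method: GET
Uri: _api/web/sitegroups?$filter=substringof('Owner', Title)&$select=Id,Title
Headers:
  Accept: application/json;odata=nometadata
```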



Next step is to parse the results that we get back from that HTTP call. There are several good blog posts out there on how to parse JSON so I won’t go into that. The Schema returns only two properties to save on call size:


"properties": {
    "Id": {
        "type": "integer"
    },
    "Title": {
        "type": "string"
    }
}
Now we should have a nice and clean JSON containing the group details we need. The next step is the trickier one. To work with permission levels we need to know the magic numeric values of “roledefid”.

“roledefid” for Permission Levels are:

Full Control: 1073741829
Contribute: 1073741827
Edit: 1073741830
Read: 1073741826

So the first call grabs the ID from the JSON and assigns it as “principalid”. This is the SharePoint group ID. We then pass “roledefid” to tell it which permission level to add.
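A hedged sketch of that call, using the standard SharePoint REST endpoint (here adding “Read”, roledefid 1073741826; the principalid of 5 is a hypothetical value, in the real flow it comes from the parsed JSON Id):

```
Method: POST
Uri: _api/web/roleassignments/addroleassignment(principalid=5,roledefid=1073741826)
```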



Once that is done, it’s almost exactly the same to remove the old permission level. We’ll just make a “remove” instead of “add” call:
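For reference, the remove call uses the matching removeroleassignment endpoint (shown here removing “Full Control”, roledefid 1073741829, for the same hypothetical principalid):

```
Method: POST
Uri: _api/web/roleassignments/removeroleassignment(principalid=5,roledefid=1073741829)
```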



That should do it! Once it has run the permissions should look like this:


If you want to tweak this to not only target the owners group, you can easily change the first HTTP call to not have the filter query. Then all groups will be included in your logic.

I’ve been working on the development of what (for me at least) is a pretty complicated Microsoft Flow. This flow provides “intelligent” notification on whether to open or close the windows at my house. I’m doing this as a practical example to better learn Microsoft Flow, to increase energy efficiency, and to get some fresh air through the house. When developing this Flow, one major challenge I ran into was effectively testing all of the conditions that factor into the decision whether to open or close the windows. These decisions are based on weather conditions, which change on an hourly basis. This blog post will go through a couple of the approaches that I recommend when developing complicated Flows.

If you are interested in previous blog posts related to the technology I’m using to decide whether to open or close the windows check out these blog posts:

The first step I took on debugging this Flow was to add an email notification at the end of every path which the Flow could go down. This included conditions where it made sense to open or close the windows or when it did not make sense to do these steps. This means that whatever path the Flow takes results in an email sent whenever the Flow is run. Below is a sample step taken which logs every variable I’m using in this Flow so that I can manually verify if the Flow performed as I expected it to.

Microsoft Flow

This was an example for “Debug Email 4” which means it was the fourth path in the Flow which resulted in a decision to not open or close the windows. This same step was performed for each of the other paths with the same content (so it’s just a cut and paste with a slightly different name “Debug Email 1” vs. “Debug Email 4” as an example). This approach works really well when developing a Flow but it does generate a lot of email if you are running the Flow regularly.

As a result of the challenge above (lots of email), I added a debug flag to the Flow by initializing a Boolean value called “Debug” which could have a value of “true” or “false”.

Microsoft Flow

Once this value has been initialized, we can use it to determine whether to send the debug emails.

Microsoft Flow
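Under the hood, that condition is just a comparison against the variable. In the workflow definition language, the condition expression looks something like this (assuming the variable is named “Debug” as above):

```
@equals(variables('Debug'), true)
```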

When I need to make changes to the Flow, I change the initialization step to set the Debug to “true” and after saving that change debug emails flow again.

Summary: If you are developing complicated Flows, I highly recommend using a Boolean Debug flag combined with debug emails on the various Flow paths, as this blog post shows.

While I was working on an updated Flow I ran into an interesting requirement for my underlying Log Analytics queries. I needed to output a single row of information which contained items which were not specifically related to each other. In this blog post I’ll show a quick trick that I put together which lets me output a single row of data for unrelated data types. For background, here’s what I was attempting to do and why:

For background, I’m working on a Microsoft Flow which provides “intelligent” notification on whether to open or close the windows at my house. This is both for energy efficiency and because it’s darn nice to get some fresh air once in a while! I have used this as my use case in these blog posts:

When I started working with Flow, I used multiple queries to Log Analytics throughout the Flow, but that proved extremely difficult to maintain. As a result, I changed my approach so that I run a single query at the start of the Flow which provides all relevant pieces of information back to the Flow. That way I don’t need to run multiple queries during the Flow. At some point I plan on presenting my lessons learned developing what is a pretty darn complex Flow, but today’s blog post will focus on a single lesson I learned in the process. For my decision to open or close the windows, I need to know multiple data points (each for the specific house location, of course).

  1. What is the current state of the windows at the house? Are they open or closed?
  2. What is the current temperature outside? Is it too hot or too cold?
  3. What is the forecasted temperature outside? Is it too hot or too cold?
  4. What is the current weather description? Does it contain indications of rain?
  5. What is the forecasted weather description? Does it contain indications of rain?
  6. Optimally, we would also want to consider:
    1. Current and forecasted windspeed
    2. The current temperature inside the house

As we look at these various items, there may or may not be common attributes which exist in each of these types of queries. In this example, the custom log which tracks the status of the windows at the house – “WindowState_CL” – currently has only one field. That field indicates whether the windows are closed or open. An example query for this data and its result is below.

WindowState_CL
| project-away TenantId, SourceSystem, MG, ManagementGroupName, Computer

unrelated data types

I could add a second field which indicates the location but let’s assume for this example that it’s not an option to do so. How can I create an output which contains both the state of the windows and the current and forecasted weather? We can do this by creating our own key which we will then use later in the join. See the code below as an example: (this is a subset of the query that I’m still developing)

Below is what this sample code does:

  1. Gathers the most recent record from WindowState_CL and adds a custom “MyKey” field using project.
  2. Determines if the current weather information indicates that it is too cold to open the windows and adds the custom “MyKey” field using an extend.
  3. Joins the two different types of data on the “MyKey” value.
  4. Removes any non-required fields and reformats the data for the final query output.

let WindowsCondition = WindowState_CL
| top 1 by TimeGenerated
| project WindowState = OpenWindow_s, MyKey = "Key";
let place = "abc";
let MinTemp = 55;
let TooCold = OpenWeather_CL
| where TimeGenerated > now(-1day) and tostring(City_s) == place
| project Description_s, Temp_d, TimeGenerated
| sort by TimeGenerated
| top 1 by TimeGenerated
| where Temp_d < MinTemp
| project WeatherCondition = Description_s, CurrentTemperature = Temp_d
| count;
let FinalTooCold = TooCold
| extend MyKey = "Key";
WindowsCondition
| join FinalTooCold on MyKey
| project-away MyKey, MyKey1
| extend TooCold = Count
| project-away Count

And here’s a sample of the output:

unrelated data types


To output unrelated data types in a Log Analytics query, you can use the project command or the extend command to create your own key field. This key field can then be used when joining the different types of data. Finally, the field which was used to join the data can be removed from the output of the final query, resulting in a successful join of unrelated data.
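As a minimal, self-contained illustration of the technique (using datatable literals rather than my real custom logs, with illustrative column names), the constant-key join looks like this:

```
let Left = datatable(WindowState: string)["Open"]
| extend MyKey = "Key";
let Right = datatable(TooCold: int)[0]
| extend MyKey = "Key";
Left
| join Right on MyKey
| project-away MyKey, MyKey1
```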

The flow of updates coming into System Center Configuration Manager is greater than ever.

At times it may seem overwhelming to administrators, and the recent announcement by Microsoft that Configuration Manager current branch 1806 has been released to the public is no different. One of the improvements focuses on the Office 365 Installer feature.

deploy Office 365 ProPlus

The Office 365 Installer feature basically wraps Configuration Manager around the typical process of downloading the Office Customization Tool (OCT) and dealing with .XML files in an editor.  Put another way: ConfigMgr puts a nice GUI form on top of it.  This was a nice feature and many administrators found it very useful.  However, this created a potential concern about (the ConfigMgr development team) keeping up that new form interface and whatever changes the Office development team might decide to implement.  As a result, the process now leverages the OCT from the ConfigMgr console.  This actually makes much better sense, and it frees up the ConfigMgr developers to focus on their end of the integration model.

Here’s one example for building a new Office 365 ProPlus application in Configuration Manager 1806.

  1. Select Software Library, and then select Office 365 Client Management
  2. On the Office 365 Client Management dashboard, scroll over to the right and click the “Office 365 Installer” icon
  3. On the Application Settings panel, enter the name, an optional description and the UNC path to the content location.  The content doesn’t exist yet, so this will be the path to where it will build the deployment source at the end of the process.  Click Next
  4. The first time you run through the process, it will prompt you to download the Office Customization Tool.  Once you download and extract the OCT content, it will continue on with a button that asks you to select the OCT.
  5. From the OCT Home page, click the “Next” link as shown below…
  6. On the Software and Language Settings, under General, enter the organization name, and click Add.  Then select the 32-bit or 64-bit version to install.
  7. Under the Software section, choose the configuration from the drop-down list, and click Add.  You can also select or deselect individual products within the suite as you prefer.
  8. Under the Languages section, you can select additional language packs to include.  Note: If you forget to click Add on the Software section above, the language features are grayed-out and inaccessible.  You can also control the option to use a CDN as a fallback for missing languages

    For most deployments, the “Match Operating System” option should suffice.  However, you can select other languages as needed.
    deploy Office 365 ProPlus
  9. Next, click on “Installation and Update Settings” on the left side of the form.
  10. From here, you can select the Installation channel, and channel version to install, as well as how to manage Updates.  The channel options are: Semi-Annual Channel, Semi-Annual Channel (Targeted), Monthly Channel, and Monthly Channel (Targeted).  Note: The list of Versions will vary according to which Installation Channel is selected.

    Note the options to remove previous versions, even MSI and Click-to-Run installations.  Very nice!
  11. Next, click on “Licensing and Display Settings” on the left side of the form.
  12. Note that the Product Key section (KMS and MAK) options may be inaccessible and pre-selected, with auto-activation enabled as well.

    The Additional Properties section provides control of Shared Computer Activation, EULA acceptance, and the Pin Icons to Taskbar feature (automatically enabled).  Note that the Pin Icons to Taskbar option no longer displays a warning about Windows version limitations.
  13. The Preferences section provides a detailed list of feature options for each of the products in the suite. You can also Search for features by name, if desired.
  14. Finally, be sure to maximize the OCT form as there are additional menu options along the far-right side.  This is where you Submit your configurations to prepare for the deployment build process.  Existing configuration can be modified by clicking the Import option as well.  If you miss something, you will see a nice reddish warning banner and highlighted keys on the form, which help guide you to items that need attention.
  15. Once you have the configuration ready, you will be returned to the Configuration Manager “Office Settings” form panel.  Click Next.
  16. On the Deployment panel, you will be prompted to go ahead and deploy the application to a Collection.  If you prefer, you can build one or more application configurations without deploying them, or you can deploy them as you create them.

    Afterwards, the Summary panel will be displayed, click Next and then after the deployment source is created, click Close.

When the deployment building process is finished, you will see how it populated the folder path you provided at the beginning.  There will be two files in the folder root: configuration.xml and setup.exe, and an office sub-folder, which contains all of the deployment content files.
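For reference, the generated configuration.xml follows the standard Office Deployment Tool schema. A minimal example of what such a file can look like is below; your generated file will reflect the options you chose, and the channel, product, and display values here are illustrative only:

```
<Configuration>
  <Add OfficeClientEdition="64" Channel="Broad">
    <Product ID="O365ProPlusRetail">
      <Language ID="MatchOS" />
    </Product>
  </Add>
  <RemoveMSI />
  <Display Level="None" AcceptEULA="TRUE" />
</Configuration>
```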

deploy Office 365 ProPlus

You may also notice that while the process builds a nice application configuration, it does not populate the Application Catalog tab, which controls the information shown in the Software Center (if you choose to deploy via Software Center).  You may want to polish that section up before deploying.

deploy Office 365 ProPlus


This is a very basic overview of how the new Office 365 Client Installation process works in Configuration Manager 1806.  Let us know your thoughts.  Thank you!

This blog post series will cover two approaches which can be used to help to customize how alerts are formatted when they come from Azure Monitor for Log Analytics queries. For this first blog post we will take a simple approach to making these alerts more useful – cleaning up the underlying query.

Log analytics

In Azure Monitor, you can define alerts based on a query from Log Analytics. For details on this process to add an alert see this blog post. This blog post will focus on how you can clean up the query results to make a cleaner alert.

Cleaning up query results

For today’s blog item we’ll start with what I would have used normally as a query for an alert:


Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time"
| where CounterValue > 40
| sort by TimeGenerated desc

The query provides all fields because we aren’t restricting what to return.

Log analytics

This will generate the alert as expected, and here’s the resulting email of that alert.

Log analytics

One of the Microsoft folks pointed out to me that if I cleaned up my query it would clean up the results and therefore it would clean up the email which is being sent out (thank you Oleg!). We can take a first stab at cleaning this up by just restricting which fields we are returning with a project statement:


Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time"
| where CounterValue > 40
| project Computer, CounterValue, TimeGenerated

Here’s the resulting email of the new alert:

Log analytics

For queries which are run directly in the portal, we can clean this up further by adding some extends which provide information on our various fields.


Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time"
| where CounterValue > 40
| extend ComputerText = "Computer Name"
| extend CounterValueText = "% Processor Utilization"
| extend TimeGeneratedText = "Time Generated"
| project ComputerText, Computer, CounterValueText, CounterValue, TimeGeneratedText, TimeGenerated

A sample result is below:

Log analytics

This is the query that we will use for the actual email alert, but we’ll showcase one more example in case it’s helpful. We can even move this to more of a sentence format for alerts such as this:


Perf
| where (ObjectName == "Processor" or ObjectName == "System") and CounterName == "% Processor Time"
| where CounterValue > 40
| extend CounterValueText = "% Processor Utilization"
| extend Text1 = " had a high "
| extend Text2 = " of "
| extend Text3 = " at "
| extend Text4 = "."
| project Computer, Text1, CounterValueText, Text2, CounterValue, Text3, TimeGenerated, Text4

A sample result for this query is below:

Log analytics

Comparing the original alert to the new alert side by side shows how much of a difference this one simple change can make in cleaning up your alerts and making them more useful: (original on left, new on right)

Log analytics
Log analytics

The alert on the right-hand side helps to remove the clutter by decreasing the number of fields shown in the results section of the email. It also shortens the email by about a third, making it easier to find what you are looking for, as you can see from the examples above.


If you want to make your current alerts more useful, use a project command to restrict the fields which are sent in the email (i.e., clean up the Log Analytics query). This is quick to put in place and results in a much more readable email alert.

P.S. The email above on the right is, however, a long way from my optimal email format, shown below:

Log analytics

We will cover that approach to alerting from Log Analytics in the next blog post in this series! Would you like to know more? Get in touch with us here.

In the previous blog post we created a query in Log Analytics which determined whether or not it makes sense to open the windows in a house. In this blog post we will use Microsoft Flow to run that query on an hourly schedule, and we can perform tasks based upon the results of the query (including writing different data back into Log Analytics).

To build this we will work through the following steps:

Creating a new Flow:

In Flow it’s easy to create your own Flows from an existing template or to import an existing Flow. For this example we are going to use the “Create from blank” option.

We skip past the common items and instead use the option to “Search hundreds of connectors and triggers”

Next we search for “Schedule” as this is how we’ll get it to schedule our Flow to run.

Scheduling the Log Analytics query to run in Microsoft Flow:

For this example we schedule the recurrence to run hourly (remember, in the previous blog post we designed the query so it can exclude specific hours of the day). We also set the time zone.

Running the query:

We can add an action below the recurrence to query log analytics. For this one we search on Log Analytics.

We are going to use “Run query and list results”. Specify your subscription, resource group and workspace name and transfer in your Log Analytics query.

Setting up a condition based on the query:

Now we add a condition based on the results of the query. If our WeatherFlag = 2 this indicates that we should open the windows. Otherwise we should not.
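For clarity, the condition’s logic can be sketched in Python (illustrative only — the actual check is configured in the Flow designer, and the row shape below is an assumption mirroring what the “Run query and list results” action returns):

```python
# Hypothetical sketch of the Flow condition: the query returns rows,
# and we open the windows only when WeatherFlag equals 2, meaning
# both the current-weather and forecast checks passed.
def should_open_windows(rows):
    """Return True when any returned row reports WeatherFlag == 2."""
    return any(row.get("WeatherFlag") == 2 for row in rows)

print(should_open_windows([{"WeatherFlag": 2}]))  # True
print(should_open_windows([{"WeatherFlag": 1}]))  # False
```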

Send an email (diagnostic):

While I was developing this, I put an action under both the success and failure conditions to send an email so that I knew the job had run and had made it this far.

Each of these was basically identical except for the subject and body contents. The yes option (i.e., open the windows) is below.

The no option (i.e., don’t open the windows) is below:

NOTE: Once debugging of the Flow has been completed, the no option should be removed to avoid spamming the email address on an hourly basis whenever it is not time to open the windows. The yes email option will probably be removed as well, since the approval process also sends an email.

Acknowledging if the task was done:

Next we use an approval step to record whether or not the option to open the windows was chosen (again, this will hopefully be used when I’m writing the “Close the windows” part of this blog series).

Was the approval accepted?

We add a new condition after the start of the approval which checks the result of the approval, so that if the response is positive it continues on to the final step in this version of the Flow.

Writing status back to Log Analytics:

Finally (at least for this blog post), we write back to Log Analytics indicating that the windows were opened. We use the “Send Data” capability for this.

We need to create a simple JSON payload which just indicates that the action was taken to open the window. This is the simplest JSON I could imagine:


{ "OpenWindow": 1 }


And we assigned it a custom log name (WindowsState) which should appear in Log Analytics as WindowsState_CL. Here’s a sample of the resulting data in Log Analytics.
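Flow’s “Send Data” action handles authentication for you, but for reference, posting the same record directly to the Log Analytics HTTP Data Collector API would look roughly like the sketch below (Python, illustrative only; the workspace id and key are dummy placeholders):

```python
import base64
import hashlib
import hmac
import json
from datetime import datetime, timezone

def build_signature(workspace_id, shared_key, body, date_rfc1123):
    """Build the SharedKey authorization header the Data Collector API expects."""
    string_to_hash = ("POST\n" + str(len(body)) + "\napplication/json\n"
                      + "x-ms-date:" + date_rfc1123 + "\n/api/logs")
    decoded_key = base64.b64decode(shared_key)
    digest = hmac.new(decoded_key, string_to_hash.encode("utf-8"),
                      hashlib.sha256).digest()
    return "SharedKey {}:{}".format(workspace_id, base64.b64encode(digest).decode())

# Dummy workspace id and key for illustration only.
body = json.dumps({"OpenWindow": 1}).encode("utf-8")
date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
auth = build_signature("00000000-0000-0000-0000-000000000000",
                       base64.b64encode(b"dummy-key").decode(), body, date)
headers = {
    "Content-Type": "application/json",
    "Authorization": auth,
    "Log-Type": "WindowsState",  # surfaces in Log Analytics as WindowsState_CL
    "x-ms-date": date,
}
# POST the body with these headers to:
# https://<workspace-id>.ods.opinsights.azure.com/api/logs?api-version=2016-04-01
print(auth.startswith("SharedKey "))  # True
```

The Log-Type header is what gives the record its custom log name, which is why the data appears under WindowsState_CL.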

What is the end result in Flow?

The end result in Flow gives us a scheduled task which runs a query. That branches into two conditions: the success condition sends an email and then an approval, and if the approval is positive it writes data back into Log Analytics indicating that state. The graphic below shows at a high level how the Flow works.

A note on debugging an issue writing to Log Analytics:

I did run into one error when trying to write to Log Analytics where it was consistently returning a status code of 500 (shown below).

The resolution was to define the workspace key on the connection for Azure Log Analytics Data Collector. (Thank you to Donnie!)

Summary: It is really simple to develop some pretty complex processes using Flow! Using Microsoft Flow we can not only schedule queries against Log Analytics, we can also take actions based on the results of the query, send notifications and approvals, and even write results back into Log Analytics! In the next blog post of this series we’ll go into how to debug complex queries in Log Analytics.

This series will introduce some tricks and tips for writing more complex queries in Log Analytics and integrating these queries into Microsoft Flow. In this blog post I will showcase an example of how to build a query composed of multiple sub-queries.

The example used for this blog post series will cover what at the top level appears to be a simple question: “Should I open the windows?” At first glance this seems to be a pretty easy item to determine, but once you peel this particular onion it’s more complicated than it looks on the surface. Here’s the first layer to the question:

  1. Is it currently colder outside than it most likely is within the house? (I live in Texas so this example won’t even consider opening up any windows to warm up a house)
  2. Is it raining outside? (Yes, we actually do get rain in Texas!)

But that’s just the first layer. We also need to consider the following:

  1. Will it remain colder outside than it is within the house? (i.e., is it going to stay cold long enough to justify opening the windows?)
  2. Is it forecasted to rain in the next few hours? (i.e., will it stay dry long enough to justify opening the windows?)
  3. Finally, notification for this should only occur during hours that someone is actually awake and willing to take action on it. For this example we’ll assume no notifications before 7 am and no notifications after 10 pm.

In this blog post series we will unpack the above and show you how you can use Log Analytics to break down a complex query of this nature. Please note, to get the weather data required see this blog post, and to get the weather forecasting data see this blog post. In this blog post we will build the query from a series of sub-queries for Current Weather and Weather Forecast, assemble these together, and then apply an approach that causes it to only fire during specific hours.

Current Weather:

We’ll start this with a query which gathers the first two items listed above. This query identifies the current weather state for temperature (Temp_d) and weather description (Description_s).

let place = "Frisco"; // string


OpenWeather_CL

| where TimeGenerated > now(-1day) and tostring(City_s) == place

| project Description_s, Temp_d, TimeGenerated

| sort by TimeGenerated | top 1 by TimeGenerated

With sample results shown below:

Note a couple of tricks in the above query. First, we’re using the “let” statement to define variables that we will use in the query. Second, we’re limiting our results to only data generated in the last day to minimize the amount of data gathered and to maximize the speed of the query. Third, we are limiting our query to the most recent record using “top 1”.

Now we need to know what options are available for the description field so that we can identify those which indicate rain. The following query looks over time to find what weather conditions have occurred.

OpenWeather_CL | project Description_s | distinct Description_s | sort by Description_s asc

With sample results shown below:

From this set of data, it looks like we want to avoid any descriptions which contain “rain” or “drizzle”. We also only want outside temperatures that fall between a certain minimum temperature and a certain maximum temperature. The query below shows an example:

let place = "Frisco"; // string

let MinTemp = 65;

let MaxTemp = 89;


OpenWeather_CL

| where TimeGenerated > now(-1day) and tostring(City_s) == place

| project Description_s, Temp_d, TimeGenerated

| sort by TimeGenerated | top 1 by TimeGenerated

| where Description_s !contains "Drizzle" and Description_s !contains "Rain" and Temp_d < MaxTemp and Temp_d > MinTemp

Sample results for a successful condition are shown below:

Now we need to take this condition and make a single variable that reflects the current weather in a yes/no type flag to reuse in our final query (in this case a 0 or 1). We can do this easily by adding a count to the end of the query.

let place = "Frisco"; // string

let MinTemp = 65;

let MaxTemp = 76;

let CurrentWeather = OpenWeather_CL

| where TimeGenerated > now(-1day) and tostring(City_s) == place

| project Description_s, Temp_d, TimeGenerated

| sort by TimeGenerated | top 1 by TimeGenerated

| where Description_s !contains "Drizzle" and Description_s !contains "Rain" and Temp_d < MaxTemp and Temp_d > MinTemp | count;

CurrentWeather


The example above uses a let statement to define CurrentWeather; the trailing CurrentWeather line then returns the results of the variable that we defined.

Sample results for a successful condition are shown below:

Weather Forecast:

We can take what we built for the current weather and apply the same concepts for weather forecasting by using OpenWeatherForecast_CL in place of OpenWeather_CL.

let place = "Frisco"; // string

let MinTemp = 65;

let MaxTemp = 76;

let ForecastWeather = OpenWeatherForecast_CL | sort by ForecastTimeDate_t asc

| where ForecastTimeDate_t > now() and ForecastTimeDate_t < now(+4hours) and tostring(City_s) == place

| project ForecastDescription_s, ForecastTemp_d, ForecastTimeDate_t | top 1 by ForecastTimeDate_t

| where ForecastDescription_s !contains "Drizzle" and ForecastDescription_s !contains "Rain" and ForecastTemp_d < MaxTemp and ForecastTemp_d > MinTemp | count;

ForecastWeather


An example success result looks just like our success condition for CurrentWeather.

Bringing the queries together:

To bring these queries together we remove the blank lines between the queries and the duplicate copies of the variables that we defined for place, MinTemp, and MaxTemp. Next we create a join on the results, where we use extend to create a WeatherFlag. If the WeatherFlag = 2 it indicates success on both conditions (since each of the two conditions has a successful value of 1 and a failure value of 0).

let place = "Frisco"; // string

let MinTemp = 65;

let MaxTemp = 76;

let CurrentWeather = OpenWeather_CL

| where TimeGenerated > now(-1day) and tostring(City_s) == place

| project Description_s, Temp_d, TimeGenerated

| sort by TimeGenerated | top 1 by TimeGenerated

| where Description_s !contains "Drizzle" and Description_s !contains "Rain" and Temp_d < MaxTemp and Temp_d > MinTemp | count;


let ForecastWeather = OpenWeatherForecast_CL | sort by ForecastTimeDate_t asc

| where ForecastTimeDate_t > now() and ForecastTimeDate_t < now(+4hours) and tostring(City_s) == place

| project ForecastDescription_s, ForecastTemp_d, ForecastTimeDate_t | top 1 by ForecastTimeDate_t

| where ForecastDescription_s !contains "Drizzle" and ForecastDescription_s !contains "Rain" and ForecastTemp_d < MaxTemp and ForecastTemp_d > MinTemp | count;


let WeatherConditions = CurrentWeather | join (ForecastWeather) on Count

| extend WeatherFlag = Count + Count1

| project WeatherFlag;

WeatherConditions


A sample successful output (IE: Open the Windows) is shown below:

A sample failure output (IE: Don’t open the Windows) is shown below:
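The flag arithmetic can be modeled with a short illustrative Python sketch (the temperature bounds and description checks mirror the query above; the check function is a hypothetical helper, not part of the KQL):

```python
# Model of the WeatherFlag arithmetic: each sub-query keeps at most
# one record and pipes it through "count", so a passing check
# contributes 1 and a failing check contributes 0. The join's
# Count + Count1 therefore equals 2 only when both the current
# weather and the forecast are suitable.
MIN_TEMP, MAX_TEMP = 65, 76
WET = ("Drizzle", "Rain")

def check(description, temp):
    """Mirror one sub-query's filter: 1 if the record passes, else 0."""
    dry = not any(w.lower() in description.lower() for w in WET)
    return int(dry and MIN_TEMP < temp < MAX_TEMP)

current = check("Clear sky", 70)    # current weather sub-query passes
forecast = check("Light rain", 68)  # forecast sub-query fails (rain)
weather_flag = current + forecast   # the join's Count + Count1
print(weather_flag)  # 1 -> don't open the windows
```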

Scheduling within the query:

Now we want to schedule this query to run every hour during certain hours of the week. Alerting was recently moved to Azure, but we can’t use Azure alerting to accomplish this yet due to the options which are currently available (see the screenshot below): we can only specify a frequency, we can’t also specify which hours not to run within that frequency.

In Microsoft Flow we can currently set a recurrence for a query in Log Analytics, but we can’t restrict it to run only during certain hours of the day either.

To work around this, we’ll add the scheduling capabilities directly into the query we have created in this blog post. To do this we’ll go back to a previous blog post I wrote on how to restrict by time, and add variables for startDatetime, StartNotification, and StopNotification. These variables are then used at the end of the query to restrict the final results to only provide data during those timeframes. The final query is shown below.

let startDatetime = startofday(now());

// StartNotification is 8:00 am (8) plus 6 hours for the UTC Offset (14)

let StartNotification = startDatetime + 14hours;

// StopNotification is 5:00 pm (17) plus 6 hours for the UTC Offset (23)

let StopNotification = startDatetime + 23hours;

let place = "Frisco"; // string

let MinTemp = 65;

let MaxTemp = 76;

let CurrentWeather = OpenWeather_CL

| where TimeGenerated > now(-1day) and tostring(City_s) == place

| project Description_s, Temp_d, TimeGenerated

| sort by TimeGenerated | top 1 by TimeGenerated

| where Description_s !contains "Drizzle" and Description_s !contains "Rain" and Temp_d < MaxTemp and Temp_d > MinTemp | count;


let ForecastWeather = OpenWeatherForecast_CL | sort by ForecastTimeDate_t asc

| where ForecastTimeDate_t > now() and ForecastTimeDate_t < now(+4hours) and tostring(City_s) == place

| project ForecastDescription_s, ForecastTemp_d, ForecastTimeDate_t | top 1 by ForecastTimeDate_t

| where ForecastDescription_s !contains "Drizzle" and ForecastDescription_s !contains "Rain" and ForecastTemp_d < MaxTemp and ForecastTemp_d > MinTemp | count;


let WeatherConditions = CurrentWeather | join (ForecastWeather) on Count

| extend WeatherFlag = Count + Count1

| extend CurrentTime = now()

| project WeatherFlag, CurrentTime;


WeatherConditions

| where CurrentTime > StartNotification and CurrentTime < StopNotification
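As a sanity check on the offset arithmetic, here is an illustrative Python sketch (assuming a fixed UTC-6 offset with no daylight-saving handling, just like the query’s hard-coded offsets):

```python
from datetime import datetime, timedelta, timezone

# Model of the notification-window arithmetic: local 8:00 am and
# 5:00 pm become 14:00 and 23:00 UTC once the 6-hour offset is
# added to the UTC start of day.
UTC_OFFSET = 6  # hours, matching the query's UTC Offset comments

def in_notification_window(now_utc):
    start_of_day = now_utc.replace(hour=0, minute=0, second=0, microsecond=0)
    start = start_of_day + timedelta(hours=8 + UTC_OFFSET)   # 14:00 UTC
    stop = start_of_day + timedelta(hours=17 + UTC_OFFSET)   # 23:00 UTC
    return start < now_utc < stop

print(in_notification_window(datetime(2019, 5, 1, 15, 0, tzinfo=timezone.utc)))  # True
print(in_notification_window(datetime(2019, 5, 1, 3, 0, tzinfo=timezone.utc)))   # False
```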

Summary: This blog post showed some tricks to use when building more complex queries in Log Analytics, including using let statements to define variables and sub-queries, restricting results by TimeGenerated to keep queries fast, using top 1 to grab the most recent record, using count to turn a condition into a 0/1 flag, and building time restrictions directly into the query.

In the next blog post we’ll use Microsoft Flow to run this query on a regular basis, send emails based on the results, and use approvals before changing data in Log Analytics.

In the previous blog post I discussed how to extend your Log Analytics alerts into Azure. Once you are extended into Azure, there are two methods available to create new alerts, which we will discuss in this blog post (the easier one is via Log Search; the other is in Monitor / Alerts).

Creating an alert from Log Search in Azure

The easy approach to create a new alert is to open Log Search in Azure as part of Log Analytics. To do this, open Log Analytics in Azure.

Then open the name of your workspace.

And then open up Log Search.

Paste in your favorite alert query from Log Analytics and then run it.

Once the query has been run you can choose the option to create a “New Alert Rule” as shown below.

The benefit of this approach is that it pre-populates the alert condition with the correct alert target and alert criteria (you may need to tweak the alert criteria from your original alert).

Define alert condition

Next you define the alert details, including the alert rule name (which cannot contain several character types, per this message), the description, the severity, whether to enable the rule on creation, and whether to suppress alerts.