Welcome to the “Introducing” series. In the previous blog post, we introduced Azure costing. In this blog post, we will introduce one of the services available in Azure: Log Analytics.
What are Logs & Metric Data?
In the Introducing IT Organizations section of this blog post series, we mentioned the underlying resources that need to be managed and maintained: Compute, Storage, and Network. Each of these resource types generates both events related to it and performance indicators that show how it is currently performing.
Event Logs: In Windows, events can be Information, Warning, Error, or Audit Success/Audit Failure. Every time a user logs into a computer, an event is written that indicates who logged on and when. If you are on Windows and want to see what events are occurring on your system, open Event Viewer and look around! (Open Start and type eventvwr.)
Metrics: Resources also regularly write performance information indicating how they are performing. For a compute resource (like your system), performance counters are gathered for % Processor Time, disk queue length, % free space, and a plethora of other counters. If you are on Windows and want to see how your system is performing, open either Task Manager (“taskmgr”, on the Performance tab) or Performance Monitor (“perfmon”) and see what is out there!
As an IT administrator, logs and metrics are key to understanding the health of an IT organization’s resources. Most organizations have hundreds (or thousands) of computers (including servers), as well as a significant number of storage and network devices. With every computer, storage device, and network device constantly writing both logs and metrics, it should be obvious that one person cannot effectively watch all of these directly using tools like the ones mentioned above (eventvwr, taskmgr, and perfmon). Gathering all of the relevant information into a single repository is where solutions like Log Analytics come into play.
What is Log Analytics?
Log Analytics provides a cloud-based repository where you can collect and store the events and performance information from your IT environment in a single location. This location is called a “workspace” in Log Analytics. Having this data in a single location makes it easier to see conditions that span multiple resources in an IT infrastructure. There are several benefits to gathering this data into a single location.
Root cause analysis: One of the common challenges in monitoring systems is effectively identifying when a single resource is impacting the health of other resources. Take a situation where a customer has a remote location connected by a dedicated network link. On one side of that link are the majority of the compute and network resources; on the other side are several compute resources (servers). If the network link goes down, monitoring systems will indicate both that the network is down and that the servers on the other side of the link are down. In reality, those servers are most likely not down, but since the monitoring system cannot communicate with them, it has to assume they are offline. By writing the event and performance data from all of the resources into a single repository, we can more easily identify that the root cause of this issue is the loss of the network link alone, not the loss of the link plus the servers behind it.
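To make the reasoning above concrete, here is a minimal sketch (not Log Analytics itself, and with an entirely hypothetical topology) of how a central repository can collapse many “down” alerts into a single root cause when everything behind a failed link goes quiet:

```python
# Hypothetical topology: which resources are reachable only through
# a given network link.
behind_link = {
    "link-remote-office": ["server-r1", "server-r2", "server-r3"],
}

def root_cause(unreachable, topology):
    """Collapse alerts: if a link and everything behind it are
    unreachable, report only the link as the root cause."""
    unreachable = set(unreachable)
    causes, explained = [], set()
    for link, dependents in topology.items():
        if link in unreachable and unreachable.issuperset(dependents):
            causes.append(link)
            explained.update(dependents, {link})
    # Anything not explained by a link failure is its own alert.
    causes.extend(sorted(unreachable - explained))
    return causes

alerts = ["link-remote-office", "server-r1", "server-r2", "server-r3"]
print(root_cause(alerts, behind_link))  # -> ['link-remote-office']
```

With all the alerts in one place, the four raw alerts reduce to one actionable root cause; a server that is unreachable while its link is healthy would still surface as its own alert.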
Identifying trends & forecasting: Another benefit of a central repository is the ability to identify trends across multiple systems. If a website runs across multiple compute resources, we can look at the performance counters on those resources to see what their utilization looks like. With enough historical information in place, we can identify a trend (e.g., is website usage going up or down? Is the CPU usage on the servers running the website going up or down?) and project it forward. If we forecast that the current set of compute resources will run out of available CPU in 90 days, we can spin up an additional web server (or increase the CPU resources available across the existing set of resources) before that happens.
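The forecasting idea above can be sketched in a few lines. This is a simplified stand-in for what you would do with a query over real workspace data; the daily CPU samples and the 90% threshold below are hypothetical:

```python
# A minimal trend-forecasting sketch: fit a linear trend to daily CPU
# utilization samples and project when it crosses a capacity threshold.

def fit_linear_trend(samples):
    """Ordinary least-squares fit over sample index: returns (slope, intercept)."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def days_until_threshold(samples, threshold=90.0):
    """Days from the last sample until the trend line hits the threshold
    (None if utilization is flat or falling)."""
    slope, intercept = fit_linear_trend(samples)
    if slope <= 0:
        return None
    crossing_day = (threshold - intercept) / slope
    return max(0.0, crossing_day - (len(samples) - 1))

# Hypothetical daily average CPU % for a web server, trending upward.
cpu_daily_avg = [52, 53, 55, 54, 57, 58, 60, 61, 63, 64]
print(days_until_threshold(cpu_daily_avg))
```

If the projected number of days is inside your provisioning lead time, that is the signal to add a web server (or CPU) before the threshold is actually reached.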
Creating custom logs: Log Analytics can be used to gather any type of data that you are interested in writing into it. As an example, we use Log Analytics to gather information for all of the automation steps that we perform within our custom automation solution at Catapult. To learn Log Analytics, I developed a method to gather solar electricity generation data and combine it with weather forecasting data to estimate electrical generation based on weather conditions. With help from a friend, we even front-ended this via an Echo Show! These examples showcase how you can use Log Analytics to store any type of data that you want.
How to get data into Log Analytics
Data can be gathered by Log Analytics through a variety of methods, including the following:
Windows and Linux Agents: The Log Analytics agent can be deployed to Windows computers (client or server operating systems) and Linux systems; after installing the agent and configuring a workspace ID and key, it sends data to Log Analytics.
System Center Operations Manager: If your organization has an existing instance of System Center Operations Manager (SCOM), it can be integrated with Log Analytics to send relevant data.
Azure-based resources: Azure Virtual Machines, Activity Logs, and Storage account log information can be easily configured to send data to Log Analytics.
REST API: Log Analytics has a REST API-based approach to ingesting data. We commonly use a PowerShell script running in Azure Automation which calls the REST API to send data to Log Analytics.
Flow/LogicApps: You can even use no-code solutions such as Flow or LogicApps to write data to Log Analytics.
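For the REST API option above, the Log Analytics HTTP Data Collector API authenticates each POST with an HMAC-SHA256 signature built from the workspace's shared key. Here is a Python sketch of building that signature; the workspace ID and key below are placeholders, not real credentials, and the actual POST is shown only as a comment:

```python
import base64
import hashlib
import hmac
import json
from datetime import datetime, timezone

def build_signature(workspace_id, shared_key, body, date_rfc1123):
    """Build the SharedKey Authorization header used by the
    Log Analytics HTTP Data Collector API."""
    string_to_sign = (
        f"POST\n{len(body)}\napplication/json\n"
        f"x-ms-date:{date_rfc1123}\n/api/logs"
    )
    digest = hmac.new(
        base64.b64decode(shared_key),        # the shared key is base64-encoded
        string_to_sign.encode("utf-8"),
        hashlib.sha256,
    ).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

# Placeholder credentials -- replace with your workspace's ID and primary key.
workspace_id = "00000000-0000-0000-0000-000000000000"
shared_key = base64.b64encode(b"not-a-real-key").decode()

body = json.dumps(
    [{"Computer": "web01", "CounterName": "% Processor Time", "Value": 63.2}]
).encode("utf-8")
rfc1123_date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
auth = build_signature(workspace_id, shared_key, body, rfc1123_date)

# The actual POST would go to:
#   https://{workspace_id}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01
# with headers: Authorization (auth above), Log-Type (your custom log name),
# x-ms-date (rfc1123_date), and Content-Type: application/json.
print(auth)
```

The same signing scheme is what a PowerShell runbook in Azure Automation implements when it calls this API; records sent this way land in the workspace as a custom log type.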
How is Log Analytics priced?
Log Analytics is priced based on the amount of data that you send to it, how long that data is retained, and how much data is exported. For details on pricing, check out the pricing calculator in the Azure Monitor section: Pricing – Azure Monitor | Microsoft Azure. This blog post also provides my top 10 approaches to minimizing the writing of non-required data to Log Analytics.
Series Navigation:
- Go back to the previous article in the series: Introducing Azure Costing
- Continue to the next in this series: Introducing Kusto