

LogicMonitor User Group in Los Angeles March 7

Posted by & filed under Trade Shows, Uncategorized.

One week from today, we’ll be in downtown LA for our first LogicMonitor User Group on March 7 (RSVP online).

Our fearless founder Steve will be presenting our latest releases and talking through our APIs, new functionality such as NetFlow, and some roadmap ideas. You’ll also get the chance to rub elbows with other LogicMonitor customers and swap best practices on infrastructure monitoring, server monitoring, performance tuning, virtualization, and more.


We’ll meet at Pitfire Pizza in downtown LA, and we’ll be supplying the pizza and beer. Nothing says good pizza like fake tattooed fingers.


We’d love to see you there. Sign up here.

 


A not uncommon question from our customers, or even from our own support people, is “Why does monitoring a Windows system running on VMware report different CPU data than monitoring the virtual machine from the ESXi host? The ESX monitoring must be wrong!”

For example, here is LogicMonitor graphing the CPU load of a Windows system running as a Virtual Machine on ESXi. In this case, the CPU is gathered from WMI, by querying the Windows OS:
[Graph: CPU load of a Windows system running as a virtual machine on ESXi]

Here is the same machine at the same time, but as the ESXi host sees the load; the two views report noticeably different numbers.
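To make the in-guest side of that comparison concrete, here is a minimal sketch of querying CPU load over WMI from inside the Windows guest. It assumes the third-party wmi Python package (pip install wmi) and illustrates the data source only; it is not LogicMonitor’s actual collector code.

# Minimal sketch: the guest-OS view of CPU load, queried over WMI from
# inside the Windows VM. Assumes the third-party "wmi" package; this is
# an illustration, not LogicMonitor's collector.
import wmi

c = wmi.WMI()  # local WMI connection; pass computer=/user=/password= to poll a remote host

for cpu in c.Win32_Processor():
    # LoadPercentage is Windows' own estimate of how busy each (virtual) CPU is
    print(f"{cpu.DeviceID}: {cpu.LoadPercentage}% busy, as reported by the guest OS")

Polling the same counters remotely works the same way by passing credentials to wmi.WMI(), which is how an agentless collector would typically gather this guest-side number.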

This post, written by LogicMonitor’s Director of Tech Ops, Jesse Aukeman, originally appeared on HighScalability.com on February 19, 2013.

If you are like us, you are running some type of Linux configuration management tool. The value of centralized configuration and deployment is well known and hard to overstate. Puppet is our tool of choice. It is powerful and works well for us, except when things don’t go as planned. Puppet failures can be innocuous and cosmetic, or they can cause production issues, such as when crucial updates do not get properly propagated.

Why?

In the most innocuous cases, the puppet agent craps out (we run the puppet agent via cron). As nice as Puppet is, we still need to goose it from time to time to get past some sort of network or host resource issue. A more dangerous case is when an administrator temporarily disables Puppet runs on a host in order to perform some test or administrative task and then forgets to re-enable them. In either case, it’s easy to see how a host can stop receiving new Puppet updates. The danger is that this may go unnoticed until a crucial update doesn’t get pushed, production is impacted, and it’s the client who notices.

How to implement monitoring?

Monitoring is clearly necessary in order to keep on top of this. Rather than just monitoring the status of the puppet server (necessary, but not sufficient), we would like to monitor the success or failure of actual Puppet runs on the end nodes themselves. For that purpose, Puppet has a built-in feature to export status information from each run.
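As one way to consume that exported status, here is a minimal sketch that checks a node’s last-run summary file. The file path and the thresholds below are assumptions (the default location on older agents; newer installs typically keep it under /opt/puppetlabs/puppet/cache/state/), not necessarily the exact approach from the full post.

# Minimal sketch: flag a node whose last Puppet run is stale or reported failures.
# Path and thresholds are assumptions; adjust for your Puppet version and layout.
import time
import yaml  # PyYAML

SUMMARY_FILE = "/var/lib/puppet/state/last_run_summary.yaml"
MAX_AGE_SECONDS = 2 * 60 * 60  # assumed threshold: no run in two hours is a problem

with open(SUMMARY_FILE) as f:
    summary = yaml.safe_load(f)

last_run = summary.get("time", {}).get("last_run", 0)    # epoch seconds of the last run
failures = summary.get("events", {}).get("failure", 0)   # failed resource events in that run

age = time.time() - last_run
if age > MAX_AGE_SECONDS:
    print(f"CRITICAL: last puppet run finished {age / 60:.0f} minutes ago")
elif failures:
    print(f"WARNING: last puppet run reported {failures} failed events")
else:
    print("OK: puppet is running and the last run was clean")

Wrapping a check like this around every node gives the end-to-end signal described above: not just “is the puppet server up,” but “did this host actually apply its catalog recently and cleanly.”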

