
In a prior blog post, I talked about what virtual memory is, the difference between swapping and paging, and why it matters. (TL;DR: swapping is moving an entire process out to disk; paging is moving just specific pages out to disk, not an entire process. Running programs that require more memory than the system has will mean pages (or processes) are moved to/from disk and memory in order to get enough physical memory to run – and system performance will suck.)

Now I’ll talk about how to monitor virtual memory, on Linux (where it’s easy) and, next time, on Solaris (where most people and systems do it incorrectly).
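As a taste of the Linux side, here’s a minimal sketch (an illustration only – not how LogicMonitor actually collects the data) that watches the swap-in/swap-out counters in /proc/vmstat, which is roughly what vmstat’s si/so columns report:

```python
#!/usr/bin/env python3
"""Minimal sketch: watch Linux swap activity via /proc/vmstat.

pswpin/pswpout count pages swapped in/out since boot; a sustained
nonzero rate means the box is short on physical memory."""
import time

def swap_counters():
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            if key in ("pswpin", "pswpout"):
                counters[key] = int(value)
    return counters

prev = swap_counters()
while True:
    time.sleep(10)
    cur = swap_counters()
    # pages per second over the last interval
    rates = {k: (cur[k] - prev[k]) / 10.0 for k in cur}
    print(f"swap-in: {rates['pswpin']:.1f} pg/s, swap-out: {rates['pswpout']:.1f} pg/s")
    prev = cur
```

Sustained swap-out is the signal to worry about; a one-off blip usually isn’t.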


Don’t Get Trapped by Traps


[Originally appeared April 30, 2014  in the Packet Pushers online community, written by Steve Francis, Founder and Chief Product Officer with LogicMonitor. This is a continuation of the 'SNMP traps' blog post previously posted on the LogicMonitor blog.]

LogicMonitor is a SaaS-based IT infrastructure monitoring company, monitoring the performance, capacity and availability of thousands of different kinds of devices and applications for thousands of customers. Where possible, we don’t rely on SNMP traps – and neither should you.


Firstly, consider what a trap is – a single UDP datagram, sent from a device to notify you that something is going wrong with that device. UDP (User Datagram Protocol) packets are, unlike TCP, not acknowledged, and not retransmitted if they are lost on the network, since the sender has no way of knowing whether they arrived. So a trap is a single, unreliable notification, sent from a device at the exact time that a UDP packet is least likely to make it to the management station – because, by definition, something is going wrong. The thing going wrong may be causing spanning tree to re-compute, or routing protocols to reconverge, or interface buffers to reset due to a switchover to a redundant power supply. That is not the time to rely on a single packet to tell you about critical events. Traps are simply not a reliable way to learn of things that can critically affect your infrastructure – this is the main reason to avoid them if possible.
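To make the fire-and-forget point concrete, here’s a tiny illustration (plain UDP rather than real SNMP trap encoding, and the management-station address is made up): the send call returns immediately whether or not anyone is listening, and nothing ever comes back to tell the sender the trap was lost.

```python
#!/usr/bin/env python3
"""Illustration only -- plain UDP, not real SNMP trap encoding."""
import socket

MANAGER = ("192.0.2.10", 162)  # hypothetical management station; 162 is the SNMP trap port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"pretend-this-is-a-trap-PDU", MANAGER)
# sendto() has already returned "success" -- even if the datagram was dropped
# by a reconverging network and never arrived. No ACK, no retransmit.
sock.close()
```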


What is virtual memory, anyway?


I was going to write about the differences between monitoring virtual memory on Linux and Solaris, but figured it would be beneficial to start with some fundamentals about what virtual memory is, and the differences between Solaris and Linux.

Virtual memory is used by all current operating systems. It simply means that the memory address a program requests is virtualized – not necessarily related to a physical memory address.  The program may request the content of memory address 1000; the computer looks at where the current map for address 1000 is pointing, and returns the contents of that address. (Virtual address 1000 may be mapped to physical address 20.)  Of course, if the memory is virtualized, that means there is no need to actually have physical memory on the server for every address the programs think they can address – so the programs can believe the server has more memory than it physically does.  Virtual memory addresses do not need to be mapped to anything until they are allocated by a program.  And a virtual memory address can be mapped not to a physical block of memory for storage, but to a block on a hard drive.
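As a toy illustration of that mapping (numbers invented to match the example above; no real kernel stores page tables this way), the sketch below tracks where each virtual page currently lives – a physical frame, a block out on disk, or nowhere at all:

```python
#!/usr/bin/env python3
"""Toy model of virtual-to-physical address translation."""
PAGE_SIZE = 4096  # typical page size on x86-64

# virtual page number -> (where the page lives, which frame or disk block)
page_table = {
    0: ("ram", 20),     # virtual addresses 0-4095 are backed by physical frame 20
    1: ("swap", 7351),  # this page was paged out to a block on disk
    # virtual page 2 was never allocated: no entry, no memory used anywhere
}

def swap_in(disk_block):
    """Pretend to read the page back from disk into a free physical frame."""
    return 99

def translate(virtual_address):
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    entry = page_table.get(vpn)
    if entry is None:
        raise MemoryError("page fault: address was never allocated")
    kind, where = entry
    if kind == "swap":
        # major page fault: the OS must bring the page back from disk
        # before the program's read can complete -- this is the slow path
        page_table[vpn] = ("ram", swap_in(where))
        kind, where = page_table[vpn]
    return where * PAGE_SIZE + offset

# Virtual address 1000 falls in virtual page 0, which maps to physical frame 20.
print(translate(1000))
```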


[Originally appeared April 16, 2014 on the OneLogin blog; guest blog post written by Annie Dunham, Director of Product Management, LogicMonitor.]

IT managers either adopt a DevOps philosophy or think it’s passé. Either way, it’s hard to argue with the foundational principle that IT automation isn’t just a trend but rather a key tenet of today’s IT Ops environment. When done right, automation brings efficiency to IT teams.


At LogicMonitor – OneLogin’s newest integration partner, which offers infrastructure performance monitoring via its SaaS-based platform – it’s often said that “the best way to get a promotion is to automate your way out of a job.” You may wonder what the reasoning is, and it’s simple: automating manual tasks is fundamental to the LogicMonitor product philosophy. A team that monitors thousands of data points each day is also testing new data centers, adding equipment vendors, and working through the laundry list of daily IT Ops responsibilities. Efficiency isn’t a nice-to-have – it’s a requirement!

Our integration with OneLogin reduces the time required for user management configuration by over 50%, as the integration completely removes the need to manage users and security policies within LogicMonitor.


Duplicate Alert Suppression (hooray!)


[Written by Chris Morgan, Senior Solutions Engineer at LogicMonitor]

At LogicMonitor, our monitoring philosophy is to provide customers with actionable intelligence. Great examples of actionable intelligence are the alerts we send you about performance issues in your IT infrastructure. Providing meaningful performance and health metrics is our bread and butter, but we want to avoid overwhelming you with alerts, as alert overload often results in apathy, defeating the original purpose of monitoring.

Consider the case where a Windows server running a SQL database receives a credential change.  Any new client request to that server will then fail, and every failure triggers a Windows event. When your server has an issue and 100 different clients are trying to access it unsuccessfully, you’ll see an event, and an alert, for each and every failure.  This quickly becomes overwhelming, and you’ll probably turn off EventSource alerting to avoid the alert storm.  Your frustration in this case would be understandable – a single Windows server can be responsible for thousands of event alerts in a very short time period.  But turning off event alerting has potentially dire consequences: you can miss crucial events you actually need alerting on, so you’re throwing the baby out with the bath water.
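The general cure is duplicate alert suppression. As a rough sketch of the idea (a generic illustration, not LogicMonitor’s actual EventSource code), events that share a dedup key trigger one alert per suppression window, and everything else within that window is counted rather than alerted on:

```python
#!/usr/bin/env python3
"""Generic sketch of duplicate alert suppression -- not LogicMonitor's
actual implementation. Events sharing a dedup key within the window are
counted rather than alerted on individually."""
import time

SUPPRESS_SECONDS = 300   # assumed window length; purely illustrative
last_alert_at = {}       # dedup key -> when we last sent an alert
suppressed = {}          # dedup key -> duplicates swallowed since that alert

def handle_event(host, event_id, message):
    key = (host, event_id)   # e.g. ("sqlserver01", 18456) for SQL login failures
    now = time.time()
    if now - last_alert_at.get(key, 0.0) >= SUPPRESS_SECONDS:
        dupes = suppressed.pop(key, 0)
        last_alert_at[key] = now
        print(f"ALERT {host} event {event_id}: {message} ({dupes} duplicates suppressed)")
    else:
        suppressed[key] = suppressed.get(key, 0) + 1

# 100 clients failing against the same server -> one alert, 99 counted duplicates
for client in range(100):
    handle_event("sqlserver01", 18456, "Login failed for user 'appuser'")
```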


While LogicMonitor is great at identifying issues that need attention, sometimes figuring out what exactly the solution is can be a bit harder, especially for network issues.  One relatively common case – an alert about failed TCP connections.  Recently, one of our servers in the lab triggered this alert:

The host Labutil01 is experiencing an unusual number of failed TCP connections,
probably incoming connections.
There are now 2.01 per second failed connections, putting the host in a warn level.
This started at 2014-02-26 10:54:50 PST.
This could be caused by incorrect application backlog parameters, or by 
incorrect OS TCP listen queue settings.

 

OK – so what is the next step?
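One hedged place to start, assuming a Linux host (the details below are Linux-specific): check the kernel cap on the accept backlog and look for listen-queue overflow counters, since those correspond to the "backlog parameters" and "listen queue settings" the alert mentions.

```python
#!/usr/bin/env python3
"""First-pass check on a Linux host for the two causes the alert suggests:
the application's accept backlog and the OS listen queue."""
import subprocess

# Kernel cap on the backlog any application's listen() call can actually get.
with open("/proc/sys/net/core/somaxconn") as f:
    print("net.core.somaxconn =", f.read().strip())

# "netstat -s" includes cumulative counters such as
# "N times the listen queue of a socket overflowed"; if that number keeps
# climbing, incoming connections are being dropped before accept().
stats = subprocess.run(["netstat", "-s"], capture_output=True, text=True).stdout
for line in stats.splitlines():
    lowered = line.lower()
    if "listen queue" in lowered or "syns to listen" in lowered:
        print(line.strip())
```

If the overflow counters are climbing, the usual knobs to look at are a larger backlog argument to the application's listen() call and/or raising net.core.somaxconn.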


While walking our dogs, I often catch up on podcasts. On Planet Money episode number 352 – The High-Tech Cow, they lay out 4 rules for the success of a business in this constantly changing economy. These rules are:

  1. Stay on top of technological change
  2. Focus (and specialize) on those things you do best
  3. Buffer yourself against the unexpected changes that are even now coming
  4. Find some product or service that you can give your customer that no one else can.

Planet Money illustrates these rules in the context of a dairy farm, but I suggest you consider them as they apply to your IT department, too.


To paraphrase Oscar Wilde – there is only one thing worse than having no monitoring. And that is having monitoring. Or at least that can be the case when you have too many monitoring systems.

LogicMonitor was recently at the Gartner Data Center conference in Las Vegas. The attendees were from somewhat larger enterprises (think General Motors) than the majority of our customer base, but shared many of the same goals – and problems – of smaller enterprises. One problem smaller enterprises do not share is the degree of proliferation of monitoring systems, and the problems this causes. Some companies had over 40 monitoring systems in place (more than one hundred for a few) – and all the commensurate silos that go with them. This means that for non-trivial problems, resolving an issue often means getting many people into a war room, so the issue can be investigated and traced across the many monitoring systems, by the many people in all their fiefdoms.

There was an informal consensus that when a problem involves multiple silos, resolution takes at least 3 to 4 days, as opposed to hours when it doesn’t.  This makes running multiple monitoring systems (which help create silos of operational people) a very expensive proposition. At LogicMonitor we often help companies consolidate from 10 or 12 monitoring systems down to LogicMonitor plus one or two others, but the benefits of consolidating 40 or more walking-dead monitoring systems would be huge.

There were several other interesting observations from the conference talks as well.

[Kevin McGibben (CEO), Steve Francis (Founder and Chief Product Officer) and Jeff Behl (Chief Network Architect) contributed to this post.]

This week LM’s Chief Network Architect, “Real Deal” Jeff Behl, was featured on the DABCC podcast with Doug Brown.  The interview covered lots of ground and sparked our interest in IT industry predictions for 2014. There are so many exciting things happening in IT Ops these days that it’s hard to name just a few.

Before it’s too late, here’s our turn at early year prognosticating.

1)   2014 is (at long last) the year for public Cloud testing.  The definition of “Cloud” depends on whom you ask. To our SaaS Ops veterans, it means a group of machines running off premise that someone else is responsible for managing.  Cloud can mean lots of things – from public Cloud infrastructure (Amazon) and Cloud services (Dyn or SumoLogic) to Cloud apps (Google Apps) and SaaS platforms (SalesForce and LogicMonitor!). The shared definition among all things Cloud is simple: it’s off-premise (i.e., outside your data center or co-lo) hosted infrastructure, applications or services. For most enterprises currently, Cloud usually represents a public data center, offering everything from generic VM compute resources to specific services such as high-performance NoSQL databases and Hadoop clusters.  Enterprises are starting to gear up to test how the public Cloud fits into their data center strategies. In the past month alone, several of our Fortune 1000 clients confirmed they’ve set aside 2014 budget and IT team resources to test public cloud deployments.



It is relatively well understood in development that dead code (code that is no longer in use, due to refactoring or changes in features or algorithms) should be removed from the code base. (Otherwise it introduces a risk of bugs, and it makes it much harder for new developers to come up to speed, as they have to understand the dead code, work out whether it is in fact still in use, and so on.)  It is less well understood that the same principles apply to the rest of the IT infrastructure as well.
