In a prior blog post, I talked about what virtual memory is, the difference between swapping and paging, and why it matters. (TL;DR: swapping is moving an entire process out to disk; paging is moving just specific pages out to disk, not an entire process. Running programs that require more memory than the system has will mean pages (or processes) are moved to/from disk and memory in order to get enough physical memory to run – and system performance will suck.)
Now I’ll talk about how to monitor virtual memory, on Linux (where it’s easy) and, next time, on Solaris (where most people and systems do it incorrectly.)
Before the July 4th holiday, we had the opportunity to host our second LogicMonitor Monitoring Roundtable.
During this session, Mike Aracic, a senior datasource developer here at LogicMonitor, gave us insight into creating datasources for your environment and provided some resources for further education.
We’ve launched a new program here at LogicMonitor to help you get insight from us and from your peers at other companies – people in different roles, solving different challenges with LogicMonitor. We’re calling this fledgling program the Monitoring Roundtable. We plan to hold one every month, with invitations extended by your account managers. Of course, you are welcome to be proactive and reach out to us, or to your account manager directly, for an invitation.
[Originally appeared April 30, 2014 in the Packet Pushers online community, written by Steve Francis, Founder and Chief Product Officer with LogicMonitor. This is a continuation of the ‘SNMP traps’ blog post previously posted on the LogicMonitor blog.]
LogicMonitor is a SaaS-based IT infrastructure monitoring company, monitoring the performance, capacity and availability of thousands of different kinds of devices and applications for thousands of customers. Where possible, we don’t rely on SNMP traps – and neither should you.
First, consider what a trap is – a single UDP datagram, sent from a device to notify you that something is going wrong with that device. Unlike TCP, UDP (User Datagram Protocol) packets are not acknowledged, and not retransmitted if they get lost on the network – the sender has no way of knowing whether they arrived. So a trap is a single, unreliable notification, sent from a device at the exact moment a UDP packet is least likely to make it to the management station – because, by definition, something is going wrong. The thing going wrong may be causing spanning tree to recompute, routing protocols to reconverge, or interface buffers to reset due to a switchover to a redundant power supply. That is not the time to rely on a single packet to tell you about critical events. Traps are simply not a reliable means of telling you about things that can critically affect your infrastructure – and that is the main reason to avoid them where possible.
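The fire-and-forget nature of a trap is easy to see at the socket level. The sketch below (the payload is just a placeholder, not a real BER-encoded SNMP PDU, and the port number is arbitrary) shows that "successfully sending" a UDP datagram tells the sender nothing about delivery:

```python
import socket

# A trap is one UDP datagram: fire-and-forget. This is NOT real SNMP encoding --
# the payload is a stand-in. The point: sendto() "succeeds" even when nothing
# is listening on the other end, because success only means the datagram was
# handed to the network stack, not that it was delivered.
payload = b"placeholder-trap-pdu"  # a real trap would be a BER-encoded SNMP PDU
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(payload, ("127.0.0.1", 9162))  # real traps go to UDP port 162
print(sent == len(payload))  # True: "success" means queued, not delivered
sock.close()
```

If that datagram is dropped by a congested or reconverging network, no error is ever reported to the sender – which is exactly the failure mode described above.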
I was going to write about the differences between monitoring virtual memory on Linux and Solaris, but figured it would be beneficial to start with some fundamentals about what virtual memory is, and the differences between Solaris and Linux.
Virtual memory is used by all current operating systems. It simply means that the memory address a program requests is virtualized – not necessarily related to a physical memory address. The program may request the content of memory address 1000; the computer looks at where the current map for address 1000 is pointing, and returns the contents of that address. (Virtual address 1000 may be mapped to physical address 20.) Of course, if the memory is virtualized, there is no need to actually have physical memory on the server for every address the programs think they can address – so programs can behave as though the server has more memory than it physically does. Virtual memory addresses do not need to be mapped to anything until they are allocated by a program. And a virtual memory address can be mapped not to a physical block of memory, but to a block on a hard drive.
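The bookkeeping described above can be sketched as a toy page table. Real MMUs and kernels do this with hardware page tables and fault handlers; this is just the mapping idea, with made-up frame and block numbers:

```python
# Toy page table: virtual page -> ("ram", frame) or ("disk", block), or absent
# entirely if never allocated. Numbers here are illustrative, not real.
page_table = {
    1000: ("ram", 20),    # virtual address 1000 backed by physical frame 20
    2000: ("disk", 512),  # paged out: backed by a block on swap, not RAM
}

def access(vaddr):
    entry = page_table.get(vaddr)
    if entry is None:
        raise MemoryError("segfault: address never allocated")
    kind, where = entry
    if kind == "disk":
        # Page fault: the OS reads the block from disk into a free frame
        # and fixes up the mapping. Pretend frame 7 was free.
        frame = 7
        page_table[vaddr] = ("ram", frame)
        return frame
    return where

print(access(1000))  # 20 -- already resident, no fault
print(access(2000))  # 7  -- page fault, now resident in frame 7
```

The `"disk"` case is the paging described earlier: the program's address keeps working, but servicing it requires a disk read, which is why heavy paging makes performance suck.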
[Originally appeared April 16, 2014 on the OneLogin blog; guest blog post written by Annie Dunham, Director of Product Management, LogicMonitor.]
IT managers either adopt a DevOps philosophy or think it’s passé. Either way, it’s hard to argue with the foundational principle that IT automation is not just a trend, but a key tenet of today’s IT Ops environment. When done right, automation brings efficiency to IT teams.
At LogicMonitor – OneLogin’s newest integration partner, which offers infrastructure performance monitoring via its SaaS-based platform – it’s often said that “the best way to get a promotion is to automate your way out of a job.” The reasoning is simple: automating manual tasks is fundamental to the LogicMonitor product philosophy. A team that monitors thousands of data points each day is also testing new data centers, adding equipment vendors, and working through the laundry list of daily IT Ops responsibilities. Efficiency isn’t a nice-to-have – it’s a requirement!
Our integration with OneLogin reduces the time required for user management configuration by over 50%, as the integration completely removes the need to manage users and security policies within LogicMonitor.
As previously noted, LogicMonitor was fortunate that none of its infrastructure or services were vulnerable to the Heartbleed vulnerability. But the fact that many sites with excellent security were affected may lead some to question the wisdom of putting business information in the hands of a SaaS provider, no matter how secure, given that the services will necessarily be provided over the Internet.
I think the fact that SaaS providers that were affected remediated the vulnerability almost immediately (e.g. Stripe, Chargify) argues that SaaS providers are a great choice for such information. The entire business of SaaS companies like LogicMonitor rests on our ability to earn and keep our customers’ trust, month after month. Consequently, SaaS providers have to react quickly to vulnerabilities.
[Written by Chris Morgan, Senior Solutions Engineer at LogicMonitor]
At LogicMonitor, our monitoring philosophy is to provide customers with actionable intelligence. Great examples of actionable intelligence are the alerts we send you about performance issues in your IT infrastructure. Providing meaningful performance and health metrics is our bread and butter, but we want to avoid overwhelming you with alerts as overload often results in apathy, defeating the original purpose of monitoring.
Consider the case of a Windows server running SQL Server that receives a credential change. Any new client request to that server will then fail, and every failure will log a Windows Event. When your server has an issue and 100 different clients are trying to access it unsuccessfully, you’ll see an event, and an alert, for each and every failure. This quickly becomes overwhelming, and you’ll probably turn off EventSource alerting to avoid the alert storm. Your frustration in this case would be understandable – a single Windows server can be responsible for thousands of event alerts in a very short time period. But turning off event alerting has potentially dire consequences: you can miss crucial events you actually need alerting on, so you’re throwing the baby out with the bath water.
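One common remedy for this kind of storm – sketched here as a hypothetical illustration, not LogicMonitor’s actual implementation – is to collapse repeated identical events into a single alert per suppression window:

```python
import time

# Hypothetical sketch: suppress duplicate alerts for the same event key
# within a time window, so 100 identical failures produce one alert.
class EventDeduper:
    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.last_alerted = {}  # event key -> timestamp of last alert sent

    def should_alert(self, key, now=None):
        now = time.time() if now is None else now
        last = self.last_alerted.get(key)
        if last is not None and now - last < self.window:
            return False  # suppressed: same event alerted recently
        self.last_alerted[key] = now
        return True

d = EventDeduper(window_seconds=300)
print(d.should_alert("login-failure:sql01", now=0))    # True: first occurrence
print(d.should_alert("login-failure:sql01", now=100))  # False: within window
print(d.should_alert("login-failure:sql01", now=400))  # True: window elapsed
```

With something like this in place, the credential-change scenario above produces one alert per window instead of one per failed client request – the baby stays in, only the bath water goes out.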
Apparently some LogicMonitor people (and it wasn’t just the guys) decided to strut their geek cred in one of our internal chat rooms this afternoon. The names have been removed to protect the geeky.
See how many of these old school technologies you recall…. (sorry Gen Y – you might have to Alta Vista – er, Google – some of this stuff.)
3:32 PM i like the “browser support” section
3:34 PM OMG IE 5.5!!!!!
3:34 PM i’d laugh if it wasn’t so sad that there still are some people using it
3:35 PM I have Netscape Navigator 4.0 installed on this machine
3:35 PM I have seen AOL IE recently
While LogicMonitor is great at identifying issues that need attention, sometimes figuring out what exactly the solution is can be a bit harder, especially for network issues. One relatively common case – an alert about failed TCP connections. Recently, one of our servers in the lab triggered this alert:
The host Labutil01 is experiencing an unusual number of failed TCP connections, probably incoming connections. There are now 2.01 per second failed connections, putting the host in a warn level. This started at 2014-02-26 10:54:50 PST. This could be caused by incorrect application backlog parameters, or by incorrect OS TCP listen queue settings.
OK – so what is the next step?
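One place to start on Linux (a general sketch, not the specific resolution from the alert above): look at the kernel’s own TCP counters in `/proc/net/snmp`, where `AttemptFails` tracks failed connection attempts – the raw counter behind an alert like this. A small parser, shown against a sample string so it’s self-contained:

```python
def parse_proc_net(text, proto):
    """Parse /proc/net/snmp-style output into {field: value} for one protocol.

    The file has pairs of lines per protocol: one header row of field names
    and one row of counter values, both prefixed with e.g. "Tcp:".
    """
    rows = [line.split() for line in text.splitlines()
            if line.startswith(proto + ":")]
    header, values = rows[0][1:], rows[1][1:]
    return dict(zip(header, map(int, values)))

# Abbreviated sample of the real file's Tcp rows (field list shortened here):
sample = "Tcp: ActiveOpens PassiveOpens AttemptFails\nTcp: 120 80 7\n"
print(parse_proc_net(sample, "Tcp")["AttemptFails"])  # 7

# On a live Linux box:
#   parse_proc_net(open("/proc/net/snmp").read(), "Tcp")
```

Sampling this counter twice and dividing by the interval gives the failures-per-second rate the alert reports; a climbing value alongside listen-queue overflow counters points at the backlog and listen-queue settings the alert message suggests checking.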