If you’re a LogicMonitor customer, you already know that our platform can automate the setup of monitoring for hundreds of common devices with pre-configured datasources. But what if the device you are trying to monitor is not already in our core repository? If it supports SNMP (or one of a myriad of other protocols – but we’ll only cover SNMP here), you can build a custom datasource.
Before you begin, though, you should know that there isn’t a “one-size-fits-all” approach to creating an SNMP datasource. The development process involves too many variables to ever take 100% of the guesswork out of it.
That being said, the following steps summarize the overall process for creating an SNMP datasource:
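The steps themselves vary by device, but one piece of interpretation logic nearly every SNMP datasource needs is rate calculation from cumulative counters, because SNMP Counter32 values roll over at 2^32. The following is a minimal sketch of that idea; the function name and signature are illustrative, not part of the LogicMonitor platform.

```python
def counter_rate(prev, curr, interval_seconds, counter_bits=32):
    """Compute a per-second rate from two SNMP counter samples,
    accounting for counter wrap (Counter32 rolls over at 2**32)."""
    if interval_seconds <= 0:
        raise ValueError("interval must be positive")
    delta = curr - prev
    if delta < 0:  # the counter wrapped between the two samples
        delta += 2 ** counter_bits
    return delta / interval_seconds
```

For example, if an interface octet counter reads 4294967290 on one poll and 4 on the next poll ten seconds later, the naive delta is negative, but the wrap-aware calculation yields a rate of 1.0 octets per second.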
Sometimes, the hardest things are not technical. They’re a result of the politics of working in large organizations, with different groups with different responsibilities. Sometimes this fragmentation of responsibilities allows 82-year-old grandmothers to walk up to the walls of supposedly ultra-secure nuclear weapons facilities. And the lessons we learn from this case can apply just as much to your IT monitoring.
We are very excited to announce that this month we kicked off our LogicMonitor 201 advanced product training series. This series is for customers who are already familiar with LogicMonitor basics and provides an in-depth training webinar on a different advanced LogicMonitor topic each month. The first topic was alerts, and more specifically, how to optimize your current LogicMonitor alerting implementation.
A poll at the beginning of the training revealed that most attendees were receiving too many LogicMonitor alerts. This is a common problem among customers who are new to LogicMonitor or who have not yet tuned their alerting implementation. Remember that we provide datasources (recipes, or templates, that define which data is collected, how it is displayed, and when alerts will be generated) with preset thresholds based on industry best practices. These thresholds work well for most critical production environments, so once you add a device into monitoring for a particular datasource you’ll start receiving alert notifications right out of the box. But those settings might not be appropriate for a staging or development environment, for example. This makes LogicMonitor alerting more of an ‘opt-out’ solution: you should disable or tune the alerts that do not make sense for your particular environment.
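Conceptually, a datapoint threshold works like an escalating comparison: the value is checked against each severity level and the most severe match wins, while a missing threshold means no alert at that level. This is a rough sketch of that behavior only; the function and the simple ">=" operator are illustrative, not LogicMonitor’s actual alert engine or threshold syntax.

```python
def evaluate_datapoint(value, warn=None, error=None, critical=None):
    """Return the most severe alert level a value triggers, or None.

    A threshold left as None is effectively 'opted out' -- no alert
    is generated at that severity for this datapoint.
    """
    level = None
    if warn is not None and value >= warn:
        level = "warn"
    if error is not None and value >= error:
        level = "error"
    if critical is not None and value >= critical:
        level = "critical"
    return level
```

Under this model, “tuning” an alert for a development environment amounts to raising a threshold or setting it to None so the noisy severity never fires.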
Here at LogicMonitor we have many different types of service providers as customers, many of whom have given us great insight into the business and taught us how they successfully sell monitoring. Below are some specific techniques that service providers can use to monetize monitoring throughout the entire customer life cycle.
1) Look like an expert on a prospect’s infrastructure before they sign up as a customer.
If you’re an avid reader of our release notes and press releases (if not, you should check them out), you already know that we just released a big upgrade to our Network Traffic Flow Analysis (formerly known as NetFlow) with a beta release of the new LogicMonitor UI.
What you might not know is how the new Network Traffic Flow can help you determine exactly where your network traffic comes from, with the added ability to do things like: Read more »
We are pleased to announce LogicMonitor’s Second Annual EU Roadshow, to be held on February 25, 2015 in London. LogicMonitor’s marketing, product, and engineering teams have put together an event that promises to be unique and informative.
The Roadshow agenda includes a roadmap presentation from LogicMonitor’s Founder and Chief Product Officer, Steve Francis; a LogicMonitor State of the Union talk from CEO Kevin McGibben; and an overview of product releases followed by a Q&A with LogicMonitor engineers.
LogicMonitor customers and prospects are highly encouraged to attend the event to enhance their performance monitoring skill sets and become better users of the platform.
LogicMonitor will be in London and San Francisco in Q1, and would love for you to vote on where we should go next. Vote here for the next roadshow city!
Interested in attending the EU Roadshow? Email Krista Damico at firstname.lastname@example.org for more info.
No one likes to talk about outages. They’re horrible to experience as an employee and they take a heavy toll in customer confidence and future revenue. But they do happen. Even publicly traded tech powerhouses, such as eBay and Microsoft, who have more technical resources than you’ll ever have, fall prey to outages. And when they do, they are closed for business, much to the chagrin of their shareholders and executive teams.
It’s not so much a question of whether an outage will occur in your company but when. The secret to surviving them is to get better at handling them and learning from the mistakes of others. Nobody is perfect all the time (my current employer, LogicMonitor, included) but I hope by talking about these mistakes, we can all begin the hard work required to avoid them in the future.
An outage occurs. A barrage of emails is fired to the Tech Ops team from Customer Support. Executives begin demanding updates every five minutes. Tech team members all run to their separate monitoring tools to see what data they can dredge up, often only seeing a part of the problem. Mass confusion ensues as groups point their fingers at each other and Sys Admins are unsure whether to respond to the text from their boss demanding an update or to continue to troubleshoot and apply a possible fix. Marketing (“We’re getting trashed on social media! We need to send a mass email and do a blog post telling people what is happening!”) and Legal (“Don’t admit liability!”) jump in to help craft a public-facing response. Cats begin mating with dogs and the world explodes.
Read more »
In recent years, solid-state drives (SSDs) have become a standard part of data center architecture. They handle more simultaneous read/write operations than traditional disks and use a fraction of the power. Of course, as a leading infrastructure, software, and server monitoring platform vendor, we are very interested in monitoring our SSDs – not only because we want to make sure we’re getting what we paid for, but also because we would like to avoid a disk failure on a production machine at 3:00 AM…and the Shaquille O’Neal-sized headache to follow. But how do we know for sure whether our SSDs are performing the way we want them to? As one of the newest members of our technical operations team, I was not surprised to be tasked with answering this question. Read more »
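One common starting point for SSD health data is the SMART attribute table printed by `smartctl -A` from the smartmontools package, which includes wear indicators on many drives. Below is a minimal sketch of pulling normalized attribute values out of that table; the exact attribute names vary by vendor, and the sample output here is fabricated for illustration.

```python
def parse_smart_attributes(smartctl_output):
    """Parse the attribute table from `smartctl -A` output into a dict
    mapping attribute name -> normalized VALUE column. Handles the
    common 10-column table layout; vendor output formats vary."""
    attrs = {}
    in_table = False
    for line in smartctl_output.splitlines():
        fields = line.split()
        if fields[:2] == ["ID#", "ATTRIBUTE_NAME"]:
            in_table = True  # the attribute table starts after this header
            continue
        if in_table and len(fields) >= 10 and fields[0].isdigit():
            attrs[fields[1]] = int(fields[3])
    return attrs

# Fabricated sample in the common smartctl -A table layout:
sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
233 Media_Wearout_Indicator 0x0032   097   097   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       8760
"""
```

A monitoring check could then alert as a wear indicator’s normalized value trends toward its threshold, rather than waiting for the drive to fail outright.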
In a prior blog post, I talked about what virtual memory is, the difference between swapping and paging, and why it matters. (TL;DR: swapping is moving an entire process out to disk; paging is moving just specific pages out to disk, not an entire process. Running programs that require more memory than the system has will mean pages (or processes) are moved to/from disk and memory in order to get enough physical memory to run – and system performance will suck.)
Now I’ll talk about how to monitor virtual memory, on Linux (where it’s easy) and, next time, on Solaris (where most people and systems do it incorrectly). Read more »
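On Linux, the kernel exposes cumulative paging and swapping counters as simple "name value" lines in /proc/vmstat, where a steadily growing pswpout is the classic sign that the system is short on physical memory. Here is a small sketch of reading those counters; on a live system you would pass in the contents of /proc/vmstat, while the sample text below keeps the example self-contained.

```python
def swap_activity(vmstat_text):
    """Extract cumulative swap-in/swap-out page counts from the text of
    /proc/vmstat. Each line is a 'name value' pair."""
    counters = dict(
        line.split() for line in vmstat_text.splitlines() if line.strip()
    )
    return int(counters.get("pswpin", 0)), int(counters.get("pswpout", 0))

# Fabricated sample in the /proc/vmstat format; on Linux you would use
# open("/proc/vmstat").read() instead.
sample = "pgpgin 12345\npgpgout 67890\npswpin 0\npswpout 42\n"
```

Because these are cumulative counters, a monitoring tool would sample them periodically and alert on the rate of change rather than the absolute value.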
Before the July 4th holiday, we had the opportunity to host our second LogicMonitor Monitoring Roundtable.
During this session, Mike Aracic, a senior datasource developer here at LogicMonitor, gave us insight into creating datasources for your environment and provided some resources for further education. Read more »