Archives: Cloud

AWS Monitoring the right way

Posted by & filed under About LogicMonitor, Cloud.

At the recent Amazon re:Invent show, LogicMonitor demonstrated its new AWS integration and monitoring. (We also announced another set of free tools – JMX Command Line Tools – but more on that later.)

“Why”, you may be asking, “is this interesting? Doesn’t Amazon provide monitoring itself via CloudWatch? And in any case, aren’t there many ‘cloud centric’ companies that do this?”
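Since the post invokes CloudWatch, it helps to see what the raw interface looks like. Below is a minimal sketch, using boto3 (modern tooling, not anything from the original post) and a made-up instance ID, of requesting an hour of EC2 CPU data: the raw datapoints on top of which a monitoring product layers discovery, alerting, and trending.

```python
from datetime import datetime, timedelta, timezone

def cpu_metric_request(instance_id, minutes=60, period=300):
    """Build the parameters for a CloudWatch GetMetricStatistics call
    covering the last `minutes` of an EC2 instance's CPU utilization."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "StartTime": now - timedelta(minutes=minutes),
        "EndTime": now,
        "Period": period,            # seconds between datapoints
        "Statistics": ["Average"],
    }

def fetch_cpu_datapoints(instance_id, region="us-east-1"):
    """Perform the actual call (requires boto3 and AWS credentials)."""
    import boto3
    cw = boto3.client("cloudwatch", region_name=region)
    resp = cw.get_metric_statistics(**cpu_metric_request(instance_id))
    return sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])
```

Each returned datapoint is a timestamped average; turning that into thresholds, escalations, and dashboards is the part CloudWatch leaves to you.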

Good questions.

[Kevin McGibben (CEO), Steve Francis (Founder and Chief Product Officer) and Jeff Behl (Chief Network Architect) contributed to this post.]

This week LM’s Chief Network Architect “Real Deal Jeff Behl” was featured on the DABCC podcast with Doug Brown. The interview covered lots of ground and sparked our interest in IT industry predictions for 2014. There are so many exciting things happening in IT Ops these days that it’s hard to name just a few.

Before it’s too late, here’s our turn at early year prognosticating.

1) 2014 is (at long last) the year for public Cloud testing. The definition of “Cloud” depends on whom you ask. To our SaaS Ops veterans, it means a group of machines running off premise that someone else is responsible for managing. Cloud can mean many things, from public Cloud infrastructure (Amazon) to Cloud services (Dyn or SumoLogic) to Cloud apps (Google Apps) to SaaS platforms (SalesForce and LogicMonitor!). The shared definition among all things Cloud is simple: it’s off-premise (i.e., outside your data center or co-lo) hosted infrastructure, applications or services. For most enterprises today, Cloud usually means a public data center, offering everything from generic VM compute resources to specific services such as high-performance NoSQL databases and Hadoop clusters. Enterprises are gearing up to test how the public Cloud fits into their data center strategies. In the past month alone, several of our Fortune 1000 clients confirmed they’ve set aside 2014 budget and IT team resources to test public cloud deployments.



We’ve updated our RightScale RightScripts (adding the execute input), along with our documentation of those scripts. We’re taking this occasion to republish this post on using our RightScale integration, which somehow got lost during our migration to our new web site last June.

How to ensure all your Rightscale managed instances are monitored in LogicMonitor.

Step 1: Import LogicMonitor RightScripts.

To use LogicMonitor’s RightScripts, go to the RightScripts tab in the RightScale Marketplace.


LogicMonitor is at VMworld 2013

Posted by & filed under Cloud, Virtualization.

VMworld 2013! Time to get your virtualization on! Last year was LogicMonitor’s first time at the conference. In the past year we’ve grown a ton, have more product to show off, and look forward to attending the show again. For those of you who have not been to VMworld, it is a grand event. It is held at the Moscone Center in downtown San Francisco, and the vendor exhibit hall is huge: you could probably fit two football fields inside.

This year we will be located at booth number 2412 (in the same aisle as Rackspace).

LogicMonitor is known for robust IT infrastructure and application monitoring, but did you know that cloud providers use LogicMonitor to determine customer requirements and to evaluate the business potential of prospective customers? Sometimes we come across a client that is taking great advantage of our powerful data collection abilities to discover new systems insights. We figured our cloud provider clients would appreciate our sharing this one!


Zumasys is a Southern California-based cloud services provider that helps companies move their technology infrastructure and applications to the cloud. For three years, Zumasys has used LogicMonitor to monitor its core cloud infrastructure, while also adding LogicMonitor to its cloud product. As part of its cloud services offering, Zumasys gives customers role-based access to LogicMonitor so they can view the performance of their apportioned cloud services.

Before Zumasys brings a new hosting client on board, it installs a LogicMonitor collector in the client’s network, tracking bandwidth usage, data quality, storage and computing resource utilization, along with any critical operating system alerts that LogicMonitor can find. Zumasys then uses the data collected to determine the resources it will require to move the hosting client onto its cloud platform.


“We are more accurate with our estimates and better prepared to migrate customers to our cloud,” says Paul Giobbi, President of Zumasys. “With close to 70% growth of our hosting platform in 2013, LogicMonitor’s ability to help size potential customer resources has been critical to our success.”

LogicMonitor is a playground of data just waiting to be used. If you want more ideas for how to use our services to gain data insights, contact one of our support engineers; we’ll be happy to chat.

Check out the Zumasys testimonial interview.


‘Meraki’ may not be the best-known name in networking, but its technology is going to touch you soon if it hasn’t already. Meraki was acquired by Cisco in November for a cool $1.2 billion, to be incorporated into Cisco’s new Cloud Networking Group.

Cisco is predicting explosive growth in cloud computing, the practice of running applications and storing data on remote servers accessed over the internet instead of running apps and storing data on your local computer. And increasingly, these cloud services will be accessed with mobile devices over wireless networks.

What Meraki brings to the table is its cloud-managed wireless network infrastructure hardware. The Access Point (AP) is the critical bridge from the wired to the wireless world. The unique feature of the Meraki APs is that you plug them into your wired network, the AP connects to the mother ship at Meraki, and you go to meraki.com to configure and manage them via a web UI.

This is a stellar leap from the typically clumsy and slow embedded web interfaces found on most APs, and the emphasis is on managing your wireless network as a whole, not a bunch of individual APs. The web UI is clean and easy to use, the network can be managed from anywhere, and the APs are kept up to date by Meraki with automatic firmware and security updates.



Is an RMM tool the only thing MSPs need for monitoring?

Posted by & filed under Cloud, MSP, RMM.

If you’re an MSP providing IT services for desktop environments where basic up/down server monitoring will do, an RMM tool is more than adequate. But for MSPs offering fixed-fee monthly service packages, or cloud services, important advantages can be gained by complementing an RMM tool with an advanced performance monitoring solution.

Monitoring customers on fixed-fee contracts
When you offer an “all you can eat” package where you are on the hook for anything that breaks in your customer’s infrastructure, the ability to be proactive and fix small problems before they become big problems becomes paramount.

An advanced performance monitoring solution will monitor your customer’s high-end systems (network, servers, virtualization, storage, etc.) in such depth that you will know about problems well before your customer does. And when there is an issue, these tools provide historical trending graphs that enable your front-line help desk technicians to solve many problems without having to bring in the help of more costly engineers.



By Ethan Culler-Mayeno, Integration Engineer

 “A cloud is made of billows upon billows upon billows that look like clouds. As you come closer to a cloud you don’t get something smooth, but irregularities at a smaller scale.”

 ~Benoit Mandelbrot

The cloud, as seen by the end user, is a wondrous tool full of seamless functionality and performance limited only by their internet connection. The truth is, the “water particles” which make up these clouds are machines.  And machines fail. Through the use of cloud providers like Amazon’s EC2, Rackspace and others, we get to add a layer of abstraction between the machines and ourselves and share in the wonder of end users.

There’s a catch: adding layers of abstraction creates complexity, and complexity increases the potential for problems. In addition, while you no longer need to worry about the state of the physical machine, if your cloud instance runs out of CPU, memory, or disk space, your application will take a hit. So, whether you are shipping hand-built servers to data centers across the globe or spinning up new machines from a cloud provider, the need for management and monitoring is paramount. But fear not! Now, thanks to the integration of LogicMonitor’s hosted, full-stack data center monitoring with RightScale’s cloud computing management, you can have your RightScale-managed hosts automatically added into your LogicMonitor portal! The next time a huge surge of traffic forces you to spin up a few hosts, monitoring them is taken care of.

Between the cloud management services provided by RightScale and the full stack, SaaS-based data center monitoring provided by LogicMonitor, you can know exactly what’s happening with your devices, both physical and… nebulous.
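To make “automatically added into your LogicMonitor portal” concrete, here is a hedged sketch of the kind of registration request an integration script could make when a new instance boots. The endpoint path, field names, portal URL, and collector ID here are illustrative assumptions for the sketch, not LogicMonitor’s documented API.

```python
import json

def device_payload(hostname, display_name, collector_id):
    """Assemble the JSON body describing the new host to monitor.
    Field names are illustrative, not a documented schema."""
    return {
        "name": hostname,                 # address the collector will poll
        "displayName": display_name,      # label shown in the portal
        "preferredCollectorId": collector_id,
    }

def register(portal_url, payload, api_headers):
    """POST the payload to a hypothetical device-registration endpoint.
    A real call would also need the portal's authentication headers."""
    import urllib.request
    req = urllib.request.Request(
        portal_url + "/santaba/rest/device/devices",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", **api_headers},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

In the RightScale integration this step is what the RightScripts handle for you: the boot script runs on the new instance, and the host shows up in the portal without anyone clicking through a UI.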

How to Use LogicMonitor RightScripts in RightScale



Kablooee! That was the sound I (and many others) heard coming from one of Amazon Web Services (aka, “the cloud”) availability zones in Northern Virginia on June 30th (http://venturebeat.com/2012/06/29/amazon-outage-netflix-instagram-pinterest/, http://gigaom.com/cloud/some-of-amazon-web-services-are-down-again/). The sound came from a weather-driven event that caused one of Amazon’s data centers to lose power. And what happens when a data center loses power (and, for unspecified reasons, UPSs and generators don’t kick in)? Crickets. Computers turn off. Lights stop blinking. The “sounds of silence” (but not as Simon and Garfunkel sing about it).

By this point, you either have your monitoring outside your datacenter, and were notified about the outage, or only became aware belatedly, and regretted the decision not to put monitoring outside. But what happens after power has been restored?  Well, that’s when good monitoring comes into play yet again…

As much hype as there has been surrounding “clouds” and “cloud computing” (and for good reason – they are changing the face of infrastructure), “clouds” are still a bunch of computers sitting in some data center – somewhere – requiring power, cooling, etc.

One of the nice things about going with a cloud service for your infrastructure is that you are largely removed from needing to monitor hardware; this is all (presumably) done for you. No having to worry about fan speeds, system board temperatures, power supplies, RAID status, and so on. However, this doesn’t alleviate the need for good and intricate monitoring of your application “stack”: everything else that makes your applications go (databases, JVM statistics, Apache status, system CPU, disk IO performance, system memory, application response time, load balancer health, etc.). This is the real guts of your organization, and these are the things you need to know are working after a reboot. And whether you are in the cloud or not, at some point all your systems are going to be rebooted. I guarantee it, so plan for it.

So what happens when your environment does reboot? It doesn’t matter whether you are in the cloud or not: when power is restored you need to make sure all the components of your software stack are back up, across all of your systems. Hopefully your disaster recovery plan does not revolve around a single “hero” sysadmin who merely needs to be pulled away from an IRC chat, an MW3 campaign, or the bar (of the three, the last is the most worrisome). Any available admin should be able to identify, via your monitoring system, which components of the stack came back up and are functioning, and which did not. Your monitoring dashboard, listing all machines and services, is your eyes and ears; without it you are blind and dumb (so to speak). When all alerts have cleared from monitoring, you should be comfortable knowing that service has been completely restored. Good monitoring is by far the greatest safeguard you can have for making sure all systems are functioning again after a reboot, and in the shortest amount of time.
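As a small, hypothetical illustration of that post-reboot checklist (not LogicMonitor itself), here is a sketch that walks a list of stack components and reports which ones are answering on their service ports again. The component names, hostnames, and ports are made up for the example.

```python
import socket

# Illustrative stack inventory: (component, host, port).
STACK = [
    ("web", "10.0.1.10", 80),
    ("db", "10.0.1.20", 3306),
    ("cache", "10.0.1.30", 11211),
]

def check_tcp(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def report(results):
    """Summarize a list of (name, up?) tuples into one status line."""
    down = [name for name, up in results if not up]
    return "all services up" if not down else "DOWN: " + ", ".join(down)

def verify_stack(stack=STACK):
    """Check every component and return the summary."""
    return report([(name, check_tcp(host, port)) for name, host, port in stack])
```

A real monitoring system does far more (service-level checks, trending, alert escalation), but even this toy version shows the point: after a site-wide reboot, a checklist beats a hero sysadmin.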

The take-home: deploy good monitoring. Make sure all aspects of your stack are monitored. All of them. When all of your machines are rebooted (at 3 AM, naturally), how do you know all aspects of your stack are back up and functioning? Good monitoring. Good monitoring = LogicMonitor. Check us out. We eat our own dog food (see the next article on the “Leap Second” bug for an account of this), and we are a SaaS service, meaning that if all your systems do reboot, your monitoring system is not part of the outage. We can help you recover faster from any outage, guaranteed.


It’s 6 AM.  Bob, an entry-level IT engineer walks into a cold, dark, lonely building – flips on the lights, fires up the coffee pot, and boots up.  Depending on what he’s about to see on his computer screen, he knows the fate of the free world could rest in his soft, trembling, sun-starved hands.

Well, maybe not the free world, but at least the near-term fate of his company, his company’s clients, and possibly his next paycheck. Bob is the newest engineer for a busy MSP, whose promise to its clients is very simple: your technology will always be up and working!

Fortunately for Bob, his MSP has a great ticketing system, so as soon as his coffee is hot and hard drive warm, he’ll login to his ticketing dashboard, right?  Wrong!  What?!  Bob!  What are you logging into?!  Oh.  Your monitoring application?  Really?


Really. True story. Dramatized for effect, name changed to protect the reasonably innocent, but a true story. Eric Egolf, the owner of CIO Solutions, a thriving MSP, told us about it just last week. “The first thing the new guy does, intuitively, is open up the monitoring portal, before he ever looks at our ticketing system.” And the other engineers are following suit. Egolf says the ticketing system is great, but their comprehensive monitoring solution reveals the actual, real-time IT landscape for their entire client base within seconds. And the most critical problems practically jump off the screen at the engineers, sometimes before a ticket has even been created.

Set an easy-to-use interface on top of the comprehensive monitoring solution, and Bob can often very quickly isolate the problem, ferret out the root cause, and resolve the issue … before the asteroid plummets to earth and destroys America … or at least before a client calls screaming as if that had just happened.

“LogicMonitor makes my engineers smarter,” claims Egolf. “An entry-level engineer can basically perform all the functions of a mid-level engineer.” And without the increase in pay grade. That keeps costs down and clients up, and while that’s a particular sweet spot for MSPs and cloud providers, the same formula holds true for SaaS/web companies and in-house IT departments. Not good, but great monitoring is the answer.

That’s how you make an engineer smarter. Next blog post: How to Make an Engineer the Life of the Party. :)

