Having worked in SaaS companies for a long time (going back to when they were called ASPs), I’ve heard many companies decline to adopt SaaS solutions due to “security concerns”. This attitude has generated quite a few blog posts recently, so I thought I’d add my 2 cents.
People involved in SaaS often think security is better in SaaS systems than in premise based systems.
Justin Pirie at “The Week in SaaS” (an essential blog for those in SaaS, I think), put it this way:
Reuven Cohen at Elastic Vapor summarizes his view:
So what’s my opinion? Having managed IT operations for a variety of companies, and worked in SaaS companies, I think I can share a realistic view.
There are (simplistically) two aspects to application security – physical security and application level security.
If you are a small company, and have premise based applications, you probably don’t care much about application level security. The company is small, and everyone will be trusted to some degree. The fact that the application is behind a firewall, with no access from outside, does provide fairly good security. The SaaS advantage here is that small companies do not usually have physically secure premise based servers. They are typically in a small server room (or closet), without much in the way of alarms, 24 hour guards, and all the other touted features of datacenters. And if you can physically access a server, you can get to the data on the server. As a friend of mine, the head of sales at a SaaS property management software company, puts it “No Fortune 500 company would consider putting their servers in your SMB server room. Yet they do have them in the same datacenter as our SaaS servers.”
As a company grows, it will typically get on top of physical security, but then application security rears its head, as security-sensitive applications will now be restricted to a subset of employees. Many premise based applications (especially open source or internally developed ones, it seems) are written without any access control designed in. And once a company reaches any size, the premise based application will need to be accessed by people outside the firewall (remote offices, teleworkers, etc.). How is that access to be granted securely, without undermining the whole security premise of “Well, it doesn’t matter if it’s not terribly secure, as no one can access it”?
Yes, you can deploy reverse proxy firewalls or SSL VPNs to provide some sort of remote access, but now the “simple” choice of premise based software for security is getting more and more complicated (and expensive).
So I think the consensus above is correct – in a company of any size, you are likely to have fewer security issues, and less expense, with a SaaS solution than with premise based software.
(FYI – LogicMonitor has its servers in Equinix datacenters.)
What are your thoughts?
We here at LogicMonitor use our own service to monitor the various parts of our infrastructure, and doing so demonstrates the financial value that LogicMonitor brings.
The more you instrument with LogicMonitor, the more power it has. In the cases below, the information and alerts that LogicMonitor presented allowed us to avoid spending money on more hardware – and given LogicMonitor’s availability requirements, each hardware purchase usually means 3x the hardware (an active/passive pair at the datacenter in question, plus failover hardware in a different datacenter).
One case was relatively straightforward – a review of the MySQL performance monitoring metrics revealed that the number of rows read due to read_rnd_next operations was very high – in the tens of thousands per second. (For those of you not DBAs, this is the number of rows MySQL reads sequentially in order to satisfy a read request – an indicator that indexes are not being used.) A quick bit of investigation by our programmers revealed a query written in such a way that MySQL was not using the existing indexes. This was rewritten, and on our release, the MySQL table scans dropped dramatically:
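As a rough sketch of the kind of check this involves: `Handler_read_rnd_next` is MySQL’s actual status counter behind this metric (visible via `SHOW GLOBAL STATUS`), but the sample values, the 60-second interval, and the alert threshold below are purely illustrative, not our real numbers.

```python
# Sketch: estimating the rate of sequential row reads from MySQL's
# Handler_read_rnd_next counter, given two snapshots of SHOW GLOBAL STATUS.
# Snapshot values and the threshold are hypothetical examples.

def scan_rate(sample1, sample2, interval_seconds):
    """Rows read via read_rnd_next per second between two counter samples."""
    delta = sample2["Handler_read_rnd_next"] - sample1["Handler_read_rnd_next"]
    return delta / interval_seconds

# Two hypothetical counter snapshots, taken 60 seconds apart:
before = {"Handler_read_rnd_next": 1_200_000}
after = {"Handler_read_rnd_next": 3_000_000}

rate = scan_rate(before, after, 60)
print(f"{rate:.0f} rows/sec read sequentially")  # prints "30000 rows/sec read sequentially"
if rate > 10_000:
    print("High table-scan rate: check for queries not using indexes")
```

A monitoring system does essentially this delta-over-time computation continuously, which is how a sustained rate in the tens of thousands per second stands out.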
This saved the system’s CPU load, disk load, and improved the response time for users.
However, a more dramatic demonstration came a week or so later, when one cluster started getting disk bound. An increase in customers, combined with some newly released features that added extra load, meant that one cluster was reaching the capacity of its hardware (or so I thought.) Average response time was hitting what we regarded as limits, and my thought was that we’d have to throw hardware (meaning money) at the issue.
However, using custom application metrics that the LogicMonitor system exposes (in our case via JMX monitoring, as our system is written in Java, but the data could have been collected by any of a variety of mechanisms, from perfmon counters, to web page content, to log files), it was apparent that the load was solely due to one particular processing queue. Our CTO investigated the caching algorithm that is applied to the data in this queue, and was able to tune it so that it was much more effective, as can be seen from the graphs below:
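To illustrate the sort of custom metric involved (our actual implementation is in Java, exposed over JMX; the class and names below are a hypothetical sketch, not our code), an application can expose a cache hit ratio per processing queue, and a monitor can graph and alert on it:

```python
# Sketch: a per-queue cache-effectiveness gauge, of the kind a monitoring
# system could poll (via JMX, an HTTP status page, log parsing, etc.).
# The counter design here is illustrative only.

class CacheStats:
    """Tracks hits and misses so a monitor can read the hit ratio."""

    def __init__(self):
        self.hits = 0
        self.misses = 0

    def lookup(self, cache, key):
        if key in cache:
            self.hits += 1
            return cache[key]
        self.misses += 1
        return None  # caller would then compute and cache the value

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

Watching a gauge like this before and after a tuning change is what lets you verify, on staging, that a caching fix actually works before it ships.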
This dropped the CPU load of the cluster:
And also improved the servers’ response time:
So while LogicMonitor did not directly solve the problem, the extensive application monitoring warned us that an issue was arising, pinpointed where in our system the bottleneck was, and allowed our staff to focus their investigation on one particular queue, rather than on all components of the system. It also allowed us to see the effectiveness of the changes on our staging systems, before we released to production.
LogicMonitor’s application monitoring saved us many thousands of dollars, and many hours of engineering time – both in limited supply at any company.