Change is an inevitable byproduct of progress. You cannot move forward without significantly altering the status quo. An entity that remains in unmoving limbo becomes stale and will ultimately wither and die on the branch. For any industry or technology to remain living, breathing, and viable, it must change.
Nowhere is this philosophy more evident than in the Managed Service Provider (MSP) industry. As an active participant in this industry for over a decade, MHD is well aware of the challenges that must be met daily in order to serve our clientele. Especially in the thriving Tampa Bay area market, there is an almost frenzied need to remain on the bleeding edge of this technology. To do so, one must remember the roots of the industry and how it has evolved.
The Evolution of the MSP
The In-House IT Technician – Workloads and Salaries
Years ago, it was common practice to employ in-house technicians to maintain the technologies necessary to keep a business competitive. These technicians wore many hats and were an integral part of day-to-day operations. A technician maintained the network, kept the servers online, was responsible for the safekeeping of the data, resolved workstation problems, and was the go-to person for virtually all matters involving network, computer, and data technologies.
Along with this responsibility usually came a substantial salary, one that many smaller businesses could not afford. An onsite technician's compensation could easily breach the six-figure range, and climb much higher once the benefits package was added in; a competent technician could ask $150,000.00 per year and receive it. In addition, the company became bound to that technician, who was embedded in the business and knew it intimately, and could not afford to lose them.
The Gap Between In-House IT and an MSP
So, a void was created: a business needed the expertise of a technician but did not have the wallet to afford one. Many times, an employee of the company would step in and help out, but this often led to disaster, as their limited knowledge could cause more harm than good. With this in mind, what was the small to mid-sized business (SMB) to do to maintain its technology in a timely and prudent manner at a very low cost?
The Rise of MSPs
In steps the Managed Service Provider. It was a stroke of genius, really. Install small agent programs on every computer and let the software monitor the system. The software checks the hardware for anomalies, and if one occurs, an alert is sent to a central processing area where trained technicians review it, determine the severity of the situation, and act accordingly.
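To make that flow concrete, here is a minimal Python sketch, not MHD's actual software, of the pattern: an agent on each machine samples a metric and pushes anything anomalous to a central queue, where a triage step (standing in for the technician) grades its severity. The device names, limits, and severity rule are all illustrative assumptions.

```python
# A minimal sketch of the agent-to-central-queue flow: agents report
# anomalies upstream, and a central triage step grades their severity.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Alert:
    device: str
    metric: str
    value: float

central_queue: "Queue[Alert]" = Queue()  # stands in for the central processing area

def agent_check(device: str, metric: str, value: float, limit: float) -> None:
    """Runs on each monitored computer; reports anomalies upstream."""
    if value > limit:
        central_queue.put(Alert(device, metric, value))

def triage(alert: Alert) -> str:
    """Central side: a technician (or rule) grades severity and acts accordingly."""
    return "critical" if alert.value > 95 else "warning"

agent_check("WS-042", "cpu_percent", 97.0, limit=90.0)
while not central_queue.empty():
    alert = central_queue.get()
    print(f"{alert.device} {alert.metric}={alert.value} -> {triage(alert)}")
```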
MHD maintains a large cadre of these technicians, experts in troubleshooting and maintaining their clients' businesses.
It was quickly discovered that with the right technicians and a cutting-edge system, clients could have their businesses monitored for pennies on the dollar compared with keeping an on-site technician. The average SMB could now enjoy the same powerful maintenance a large company could, without the $150,000.00 to $200,000.00 expense of a dedicated employee.
MSP Challenges and Solutions
This synergy lasted for many years, but as the complexities of maintaining a business's technology grew, it became obvious that Remote Monitoring and Management (RMM) software alone was not enough. The software had to become smarter and faster, and able to resolve some problems on its own without the intervention of a technician. Many of the day-to-day alerts experienced by a Managed Service Provider's client can be resolved and closed without the technician, or the client, being part of the process. In addition to this artificial intelligence capability, there was another step the technology had to take.
A good RMM can monitor many checks within a system: hard drive space and activity, CPU usage, process usage, file I/O, memory handling, and several hundred others too numerous to mention. These checks are evaluated individually, and alerts are based on a set of thresholds: if a check breaches its threshold, an alert is sent. This sounds simple enough, but the approach can generate alerts that are not critical enough to warrant notifying a technician.
As an example, disk usage should not exceed 15 to 20 percent; that is the industry standard used when setting this check's threshold. If disk usage exceeds it, a flag is set and a counter is incremented. If the counter exceeds its own threshold, an alert is sent, and it continues to be sent until a technician resolves the problem. On the surface this seems like a prudent way to handle this type of disk activity; however, a problem can easily arise.
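Here is a minimal Python sketch of that threshold-plus-counter pattern. The 20 percent limit and the count of three consecutive breaches are illustrative values, not settings from any particular RMM product.

```python
# A minimal sketch of threshold-plus-counter alerting.
DISK_USAGE_LIMIT = 20.0   # percent; the threshold for a single sample
BREACH_LIMIT = 3          # consecutive breaches before an alert is raised

def check_disk_usage(samples: list[float]) -> list[str]:
    alerts = []
    breach_count = 0
    for usage in samples:
        if usage > DISK_USAGE_LIMIT:
            breach_count += 1          # flag set: increment the counter
        else:
            breach_count = 0           # activity returned to normal
        if breach_count >= BREACH_LIMIT:
            alerts.append(f"ALERT: disk usage {usage:.0f}% sustained")
    return alerts

# A sustained burst of perfectly legitimate disk activity keeps the counter
# climbing and fires an alert even though nothing is actually wrong:
print(check_disk_usage([5, 8, 45, 60, 55, 40, 7]))
```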
Backups can cause a flurry of disk usage. Microsoft can push updates whose installation also drives disk usage up. Updating many popular applications can cause a spike as well, and an update may even change the software's implementation so that it uses more temporary files and spikes disk usage every time it runs. These are just a few of the outside influences that can cause a check to trigger artificially when no alert should have been sent.
These occurrences are becoming increasingly frequent as operating systems, applications, and end users put additional pressure on the hardware. It has become painfully clear that simple threshold checks are no longer as accurate as they should be, and at times they can be a hindrance to a business.
Technology must be developed that looks at the entire picture of a particular business. The RMM, with all of its checks and balances, must now consult historical data to determine whether an alert should be sent. Collecting data over a long period reveals the historical nature of disk usage, so the system can recognize an apparent anomaly and ignore it when it is a normal occurrence for this particular business at this particular time. Using historical data can not only stanch the flow of unwarranted alerts but also be used to predict failures.
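One way to picture this is the Python sketch below, which keeps per-hour history of a disk-usage check and only alerts when a reading is unusual for that business at that time of day. The hourly buckets, the three-standard-deviation rule, and the 20 percent fallback threshold are all illustrative assumptions, not a prescription.

```python
# A minimal sketch of suppressing alerts with historical data: a new reading
# only alerts if it is unusual *for this business at this time* (here, more
# than three standard deviations above the historical mean for that hour).
from statistics import mean, stdev

history: dict[int, list[float]] = {h: [] for h in range(24)}  # hour -> past disk usage %

def record(hour: int, usage: float) -> None:
    history[hour].append(usage)

def should_alert(hour: int, usage: float, k: float = 3.0) -> bool:
    past = history[hour]
    if len(past) < 2:                  # not enough history yet; fall back to caution
        return usage > 20.0
    baseline = mean(past)
    spread = stdev(past) or 1.0        # avoid a zero-width band
    return usage > baseline + k * spread

# The 2 a.m. backup window is historically busy for this business...
for night in range(14):
    record(2, 55.0 + night % 5)
print(should_alert(2, 60.0))   # False: normal for this business at this hour
print(should_alert(2, 99.0))   # True: anomalous even for the backup window
```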
SMART Technology
Years ago, the hard drive industry introduced a technology termed S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology). This system was based on thresholds and would send alerts when certain thresholds were breached. The technology worked well for a while, but it was found that historical data was necessary to accurately predict drive failure. Prompted by that finding, the hard drive industry began saving historical data and using it, in conjunction with threshold data, to make failure predictions. Over the years this combination of historical and threshold data has become very sophisticated and very accurate.
For the Managed Service Provider industry to take the next step, an equally S.M.A.R.T. technology must be developed. Using historical data as well as threshold flags, a finely tuned AI can be built not only to stem the flow of unwarranted alerts but also to predict failures, bottlenecks, and needed upgrades that may not be apparent to the SMB or the MSP.
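As an illustration of that predictive side, the short Python sketch below fits a straight-line trend to made-up daily disk-consumption readings and estimates when the drive will fill, the kind of forecast that lets an upgrade be scheduled before anything breaks. A production system would, of course, use far richer models than a straight line, and all the numbers here are invented.

```python
# A simple trend-based prediction (illustrative data; requires Python 3.10+
# for statistics.linear_regression): fit a line to daily disk consumption
# and estimate how long until the drive is full.
from statistics import linear_regression

days = list(range(10))                             # one reading per day
used_gb = [400 + 12 * d + (d % 3) for d in days]   # a drive filling ~12 GB/day
capacity_gb = 1000

slope, intercept = linear_regression(days, used_gb)
days_until_full = (capacity_gb - used_gb[-1]) / slope
print(f"Filling at ~{slope:.1f} GB/day; full in ~{days_until_full:.0f} days")
```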
MHD is a Managed Service Provider that has always been on the bleeding edge of technologies that benefit our clientele.
Predictive historical analysis is a discipline we follow closely and are taking steps to implement. We also know that although the software, data, and analysis may become more complex, it is the human touch that makes MHD Communications grow.
Please call if you're researching the benefits of a Tampa managed IT services company.
Call us today for a consultation: 833-MHD-INFO (833-640-2162)
MHD is your premier IT partner serving businesses and organizations throughout Tampa, FL, Palm Beach, FL, and surrounding communities.