The Critical Request SOP that will save your MSP $56,160+ per year
Here’s a statistic to blow your mind: The All-Hands On-Deck approach for Critical Request response is costing a 6-tech MSP operation $56,160 per year. Don’t believe me? Let’s go through the math and how to avoid this Billable Hours waste in your business.
First, our recommendation:
All MSPs have an SOP for Critical Requests, but for most it’s the unwritten All-Hands On-Deck approach. Instead, we highly recommend writing a Critical Request SOP to cut the confusion about what to do when a Critical Request comes in. Cutting the confusion will decrease Time to Engagement, increase Client Satisfaction, and increase Resource Utilization, since I’m sure confusion is a non-billable event at most MSPs. It’s important to note that Critical Requests are unplanned and unscheduled.
Next, the math:
Studies show that a single distraction or disruption costs 24 minutes of productivity. Put differently, an interruption can unnecessarily derail a technician from the primary task at hand for 24 minutes. And since most MSPs experience a Critical Request 3 times per week (156 times per year), establishing a viable course of action to respond to Critical Requests is essential: 0.4 hours (24 minutes) × 156 events per year × 6 techs × a Standard Role Rate of $150/hour = $56,160.
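If you want to sanity-check the math for your own shop, the arithmetic is a few lines of code. The sketch below simply restates the article’s assumptions (24 minutes lost per interruption, 3 Critical Requests per week, 6 techs, a $150 Standard Role Rate) so you can swap in your own numbers:

```python
# Annual cost of the All-Hands On-Deck approach, using the article's figures.
MINUTES_LOST_PER_INTERRUPTION = 24   # productivity studies
REQUESTS_PER_WEEK = 3
WEEKS_PER_YEAR = 52
TECHS = 6
STANDARD_ROLE_RATE = 150             # dollars per billable hour

hours_lost_per_event = MINUTES_LOST_PER_INTERRUPTION / 60   # 0.4 hours
events_per_year = REQUESTS_PER_WEEK * WEEKS_PER_YEAR        # 156

annual_cost = hours_lost_per_event * events_per_year * TECHS * STANDARD_ROLE_RATE
print(f"All-Hands On-Deck: ${annual_cost:,.0f}")            # $56,160

# Pre-positioning limits the disruption to a single tech:
single_tech_cost = hours_lost_per_event * events_per_year * 1 * STANDARD_ROLE_RATE
print(f"One pre-positioned tech: ${single_tech_cost:,.0f}")  # $9,360
```

Change `TECHS` and `STANDARD_ROLE_RATE` to match your own bench and rates to see what the approach is costing you.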
And now the why:
Back in 2011, a year after I was asked to improve my company’s Service Delivery operation from a reactive, break/fix house to a proactive, data-driven one, Critical Requests were causing the most chaos for the Service Delivery team. It was then that I realized that if we could break the reactive All-Hands On-Deck approach and reduce the disruption to a single Tech, the cost impact would drop to $9,360: a savings of $46,800, more than it costs to have Advanced Global guide an MSP through the Service Delivery Improvement programs.
I then asked myself which tech should be pre-positioned for Critical Requests, and for how many hours per day. For the answers to these two questions, we turned to the data. A review of all the critical-priority tickets completed the previous month revealed they were being assigned to the Project Engineers. Go figure! No wonder they were causing the most chaos. Since a Critical Request, by ITIL definition, has High Urgency and High Impact, we need one of our best techs available to take the requests. The report also revealed that we needed 4 hours per day dedicated to Critical Request remediation.
Note: Our MSP was a large shop with 80 billable resources, and today, the average Total Hours Worked is half of what it was in 2011. The rest of the principles remain true today.
Asking executives if we could block out a Project Engineer for Critical Requests based on data was a no-go. They were very concerned that the 4 hours/day would be wasted, and they’d be foregoing $700 per day or $168,000 per year in opportunity costs. The executives didn’t realize the Service Coordinators controlled the calendars, so we ignored them and pre-positioned one of the Project Engineers for 4 hours each day.
The next step happened organically: Now that we had a pre-positioned Project Engineer standing by, when a Critical Request call came in, the Service Coordinator:
Took the call and opened a ticket, ensuring that all the needed information was added to save valuable engineering research time.
After hanging up from the call, and before completing the ticket creation process, IM’d the pre-positioned Project Engineer (we used Cisco Jabber, but I am sure Teams works almost as well 😊)
The pre-positioned Project Engineer was given the basic facts: Critical Request, Client, Issue, Contact Info
By the time the Service Coordinator completed the ticket creation process, the Tech was already engaging and talking with the client
When the 360 Critical Request notification went out, everyone knew who was already engaging
These 5 steps led to no noise, no disruption, no inefficiencies, and very, very happy clients. The trust and partnership with our clients grew beyond our wildest imagination and turned them into raving fans.
As a side note, the cost savings saved my butt. About a month after implementing the new Critical Request SOP, I was called to the principal’s office: my head was on the chopping block. Luckily for me, my supervisor stopped along the way to check last week’s Utilization Report. Sure enough, the Project Engineer who covered most of the pre-positioning schedule was indeed way below the Weekly Billable Hours goal, but the Resource Utilization report also showed that, company-wide, it was the highest Billable Hours week in the company’s history. That little fact saved my job, and about a month later, in a different proactive, data-driven conversation, the executives had my back and said we needed to do this going forward.
Today, here’s what a good Critical Request Response SOP looks like:
Note: Before implementing the Critical Request SOP, review a Ticket Completed Last Month report to determine how many hours per day a tech needs to be pre-positioned to meet Critical Request expectations. Because of the nature, urgency, and impact of Critical Requests, the pre-positioned tech is usually one of the most experienced techs in the company.
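That sizing review boils down to totaling critical-ticket hours per day and covering the busiest day. Here is a minimal sketch of that calculation; the ticket records and field names are made up for illustration and are not an Autotask schema — your real input is your own Tickets Completed Last Month report, filtered to critical priority:

```python
from collections import defaultdict

# Illustrative data only; pull real rows from your own
# "Tickets Completed Last Month" report, critical priority only.
critical_tickets = [
    {"completed": "2024-05-01", "hours_worked": 2.5},
    {"completed": "2024-05-01", "hours_worked": 1.5},
    {"completed": "2024-05-02", "hours_worked": 3.0},
    {"completed": "2024-05-03", "hours_worked": 4.0},
]

# Total the critical hours logged on each day.
hours_by_day = defaultdict(float)
for ticket in critical_tickets:
    hours_by_day[ticket["completed"]] += ticket["hours_worked"]

# Size the daily pre-positioning block to cover the busiest day seen.
block_hours = max(hours_by_day.values())
print(f"Pre-position one tech for {block_hours:g} hours/day")
```

With this sample data the answer comes out to 4 hours/day, the same block the report gave us back in 2011; your month of tickets will give you your own number.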
1. As soon as a critical request comes in, IM the pre-positioned tech assigned to cover critical requests for the day, or half-day, with the following information:
Client’s name
Type of critical issue
Client contact person
2. Create a ticket in Autotask.
3. Assign the ticket to the tech.
4. Save the ticket with a critical priority, which will send out a critical request notification, not only letting the client, account manager, service manager, and others know a critical request has been received, but also who’s working the ticket.
5. Check in with the tech, and then the client, every 45 minutes until the issue is no longer critical. Update the other stakeholders as needed.
6. Stand by to assist the engaging tech. Examples of the type of help that may be needed are:
Monitoring if any other tickets from that location have come in.
In the case of a server down, someone else may need to be assigned to perform an on-premise or cloud virtualization if there are issues restoring the server.
If it is a utility or ISP outage, getting an ETA and monitoring it. You may want to notify other clients that are using the same affected services. Keep a list of these clients handy, so you can call back with updates.
If it’s a virus, activating your Client Virus SOP, alerting the client to get everyone off the network, and sending a tech to start cleaning up at the location. This must be done onsite.
If it’s an equipment issue, finding out if you have one to sell. If you don’t have one in inventory, see if you have a spare to get them by until a new one is acquired.
Contacting a vendor to order warranty parts or on-site service
7. Once the crisis has passed, and the ticket has been completed, downgraded, or replaced with a non-critical ticket:
Inform the sales team that a piece of equipment needs to be ordered ASAP.
Track any parts on order and make a service call to replace any spares with the new part. If this is done after hours, schedule the tech at the location first thing in the morning to check on any issues.
Reach out to the client to see if there are any lingering issues needing attention. This can be after a few hours or the next morning.
Schedule an internal postmortem meeting to:
Debrief with the team
Update the critical request SOP with lessons learned
Perform a Root Cause Analysis (RCA) to find out if there was anything that could have been done to prevent the critical issue.
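For the technically inclined, steps 1 through 4 above can be sketched as a single dispatch routine. This is an illustrative outline only: `send_im` and `create_ticket` are placeholder hooks for whatever messaging and PSA integrations you actually use (Teams, Autotask, etc.), not real APIs. The design point it captures is that the IM goes out before ticket creation finishes, so the tech is already engaging while the paperwork completes:

```python
from dataclasses import dataclass

@dataclass
class CriticalRequest:
    client: str    # client's name
    issue: str     # type of critical issue
    contact: str   # client contact person

def handle_critical_request(req, prepositioned_tech, send_im, create_ticket):
    # Step 1: IM the pre-positioned tech FIRST, with the basic facts,
    # so engagement starts before the ticket is finished.
    send_im(prepositioned_tech,
            f"CRITICAL: {req.client} / {req.issue} / contact: {req.contact}")
    # Steps 2-4: create the ticket, assign it to the tech, and save it
    # with critical priority (which triggers the 360 notification).
    ticket = create_ticket(req, assignee=prepositioned_tech, priority="Critical")
    return ticket
```

The ordering is the whole point: swap the two calls and you are back to the tech learning about the crisis only after the ticket round-trip, which is exactly the delay the SOP removes.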
- Steve & Co.
Interested in evaluating your service delivery operations? Click here to take our free MSP Service Delivery Procedural Self-Evaluation to see how you can improve & uncover the 6 keys to profitability.