Sponsored Content

The hidden “essential worker” right now is you

By Rachel Obstler, VP Product Management at PagerDuty

By now, you’re familiar with the term “essential worker.” And while none of us can compare what we do to the life-saving work of nurses, doctors and other public servants on the front lines of this pandemic, if you’re in IT, you are one of today’s essential workers. 

While the planet is in the throes of a global emergency and the world economy is shaky, somebody has to keep the internet on. That’s not an easy job by any stretch. Before the pandemic hit, 62% of IT professionals in North America reported spending more than 100 hours each year on disruptive, unplanned work. Now, that’s coupled with the impact of the biggest crisis any of us in IT will (hopefully) ever face.   

Luckily, IT and development teams aren’t new to handling a crisis. Outages, downtime, slowdowns, incidents — there’s a whole vernacular dedicated to describing moments when technology fails and the pressure is on. And when that pressure is on you to fix the problem, suddenly your skills become more essential than ever. 

What we’re experiencing now with COVID-19 is an unprecedented amount of pressure. Across our customer base at PagerDuty, companies are seeing, on average, twice as many technical incidents as normal. The verticals hit hardest — online learning, workplace collaboration and retail — are experiencing up to 11x as many incidents.

Companies are having to scale infrastructure, add new capabilities and manage layers of unfamiliar complexity. Healthcare organizations, for example, are scrambling to set up efficient crisis lines to manage patient needs from a distance and establish telephone support. On top of that, they need adaptable technology like collaboration platforms and reliable WiFi to continue operating while some employees are forced to work from home. And this is all in addition to the work they’re doing to actually care for people who are sick. It’s critical that their systems work so they can stay focused on the problem at hand.

For retailers, many of the operations that took place at brick-and-mortar locations have moved online. Overnight, they’ve had to optimize their mobile and web order experience (or build one from scratch), ensure supply chain processes aren’t interrupted and provide employees with access to the right systems and communication channels — and do all of that without impacting the customer experience.

The teams behind the scenes keeping apps and services up and running are the hidden “essential workers.” They’re the unsung heroes keeping the lights on for organizations around the world while adapting to change and addressing their own set of challenges, like shifting an entire network operations center online in a matter of days. 

In addition to demand doubling overnight, these teams have to develop new processes to account for remote workers, adjust to new IT services and infrastructure needs on the fly and, in many cases, spin up their own crisis response teams. Yet somehow, in the face of all this added complexity and stress, many companies are doing amazing things. Some are even resolving incidents faster than before the pandemic hit.

Understandably, these companies are in “hyper care” mode. They’re doubling down on preparedness measures and incident response, they’re allocating the right resources and they’re working around the clock to keep the lights on.

We looked at how else these organizations are succeeding in such a challenging time and came away with some insights so others can follow in their footsteps. Combining those observations with learnings from PagerDuty’s 12,000 global customers, we’ve come up with five best practices for hidden essential workers. Here’s how to manage a digital crisis in the face of adversity:   

1. Be prepared

Get familiar with the services you are responsible for. This could include knowing the last time a service went down and why, where to go if you have questions, and what business applications it impacts. If you are doing an on-call handoff, make sure you tell the next person on call which problems occurred and whether you think they were fully resolved or only mitigated.
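To make that handoff concrete, here is a minimal sketch of a per-service handoff record you could keep in a runbook or a small script. The service name, fields and contacts are hypothetical examples, not a PagerDuty feature:

```python
# Minimal sketch of a per-service on-call handoff record.
# All service names, issues and contacts below are hypothetical examples.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HandoffNote:
    service: str
    last_outage: str                                       # when it last went down, and why
    open_issues: List[str] = field(default_factory=list)   # mitigated but not yet solved
    escalation_contact: str = ""                           # where to go with questions
    business_impact: str = ""                              # applications the service supports

note = HandoffNote(
    service="checkout-api",
    last_outage="2020-04-02: connection pool exhausted under 3x normal load",
    open_issues=["pool size raised as a stopgap; root cause still open"],
    escalation_contact="#payments-oncall",
    business_impact="web and mobile checkout",
)

print(f"Handing off {note.service}: {len(note.open_issues)} open issue(s), "
      f"last outage: {note.last_outage}")
```

Even a plain-text note covering those same fields does the job; the point is that the next person on call starts with the context you already have.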

2. Know your commander

The first thing people want to know during a crisis is who the decision maker is. That takes the pressure off the team that is working to determine the cause and potentially fix the problem. This person is the one responsible for asking the tough questions like, “What is the risk of the fix you have proposed?” or “What is the current customer impact?” Establishing an incident commander as the decision-maker during a period of uncertainty provides clarity. They become the highest-ranking individual on the major incident call, regardless of their day-to-day rank.

3. You can’t fix what you can’t see

Ensure you have the processes and tools in place to remotely manage IT performance, spot incidents early and take action. One of our product team members said the other day, “You can’t fix what you can’t see,” and that is spot on. Making sure the right telemetry is in place and routed to the right person to evaluate the problem is key. The last thing you want is to find out about a problem when a customer complains — by then it’s already too late. Ideally, you learn about potential problems and fix them before customers even notice.
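As a sketch of what that looks like in practice, the snippet below polls a health endpoint and notifies whoever is on call after several consecutive failures. The URL, interval, threshold and notify_on_call function are hypothetical placeholders, not a prescribed setup:

```python
# Minimal sketch: poll a health endpoint and page the on-call engineer
# before a customer notices. The URL, interval, threshold and notify
# mechanism are hypothetical placeholders.
import time
import urllib.request

HEALTH_URL = "https://example.com/healthz"   # hypothetical endpoint
CHECK_INTERVAL_S = 30
FAILURES_BEFORE_ALERT = 3                    # avoid paging on a single blip

def notify_on_call(message: str) -> None:
    # In practice this would page whoever is on call via your alerting or
    # incident-management tool; here we simply print the alert.
    print(f"ALERT: {message}")

def healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def main() -> None:
    consecutive_failures = 0
    while True:  # runs until interrupted
        if healthy():
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures == FAILURES_BEFORE_ALERT:
                notify_on_call(
                    f"{HEALTH_URL} has failed {FAILURES_BEFORE_ALERT} checks in a row"
                )
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    main()
```

Requiring a few consecutive failures before alerting is one simple way to keep a single network blip from paging someone at 3 a.m.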

4. Learn to love a post-mortem

Run blameless post-mortems with the intent of identifying proactive changes that prevent similar problems in the future. That will help establish a culture of continuous improvement. Without a post-mortem, you and your team miss out on the opportunity to learn what you’re doing right, where you can improve, and most importantly, how to avoid making the same mistakes again. A well-designed, blameless post-mortem will help your team improve their infrastructure and incident response process.

5. Practice your best practices

Just as firefighters have facilities where they run drills before going into a live situation, developers and IT teams need to be prepared to tackle the unexpected during a crisis scenario. For example, we run a process called “Failure Fridays” — we pick an area of our product to take down so we can test its resilience. This allows teams to practice the incident response process in a controlled environment. We’ve also employed chaos engineering, where automated programs inject failures into a system, such as restarting a service.
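For illustration, a bare-bones fault injector in that spirit might look like the sketch below; the service names and schedule are hypothetical, and this is a simplified stand-in rather than PagerDuty’s Failure Friday tooling:

```python
# Minimal sketch of scheduled fault injection: restart a randomly chosen
# service so the team can confirm that alerts fire and recovery works.
# The systemd unit names and interval are hypothetical examples.
import random
import subprocess
import time

CANDIDATE_SERVICES = ["orders-worker", "search-indexer"]  # hypothetical units
INTERVAL_S = 3600  # one injected fault per hour during the exercise window

def restart_service(name: str) -> None:
    # Restart a systemd-managed service; requires appropriate privileges.
    subprocess.run(["systemctl", "restart", name], check=True)

def chaos_round() -> None:
    victim = random.choice(CANDIDATE_SERVICES)
    print(f"[chaos] restarting {victim}; watch that alerts fire and it recovers")
    restart_service(victim)

if __name__ == "__main__":
    while True:  # run only during an announced exercise window
        chaos_round()
        time.sleep(INTERVAL_S)
```

The value is less in the tooling than in the habit: teams rehearse detection, escalation and recovery when nothing is actually on fire.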

This is such a difficult time, yet it presents many opportunities to try new things and find ways to become more efficient and scale. On that note, we’ve tapped into our wealth of internal knowledge and our vast customer base to put together some tips on incident training and recommendations for virtualizing your NOC. Check them out here.