Radio engineers and technologists worry about whether their networks are adequately prepared to defend against cybersecurity incursions. We talked to Chris Tarr, CSRE, AMD, DRB, CBNE, director of technical operations for Wisconsin at Entercom, which is one of several radio groups in the United States that have suffered recent ransomware attacks. Opinions are his own and not necessarily those of Entercom.
Radio World: How well prepared is the radio industry?
Chris Tarr: There’s still a mentality that you can protect yourself and make yourself completely invulnerable. It’s never a matter of whether it’s going to happen to you; it’s a matter of when.
Do what you can to fortify your systems, [but] you can put up the best fortress in the world, and once they're behind that wall, everything is fair game. A lot of companies do a good job of preventing people from getting in from the outside but don't do anything about people who actually get inside. The theme that I've seen [in other attacks] is that nobody had a plan. Always assume someone is going to get in.
Everybody says, “Oh well, we have backups so we’re okay.” A lot of people who have backups never check them. They never validate them, they never make sure they’re working, and they don’t realize how long it takes to restore that stuff. A lot of people get by with, “We’ve got antivirus, we take backups of everything. We’ve got a firewall, we’re good. Worst case, we just restore from our backups.”
If backups are part of your plan, do you have a plan to check them every day, every two days? How many times a week do you back up? Do you back up [only] certain files? Even if you’re in the cloud, are you able to roll back if something gets attacked?
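A daily check along these lines can be scripted. As a rough illustration only, not anything Tarr describes at Entercom, the sketch below assumes a directory of tar archives and a one-day retention window, and simply confirms that the newest backup exists, is recent, is not empty, and actually opens:

```python
# Minimal sketch of a daily backup sanity check. The paths, the retention
# window and the tar format are illustrative assumptions, not a real setup.
import tarfile
import time
from pathlib import Path

BACKUP_DIR = Path("/mnt/backups/automation")   # hypothetical backup location
MAX_AGE_HOURS = 26                             # alert if newest backup is older than ~a day

def newest_backup(directory: Path) -> Path | None:
    archives = sorted(directory.glob("*.tar.gz"), key=lambda p: p.stat().st_mtime)
    return archives[-1] if archives else None

def check_backup() -> list[str]:
    problems = []
    latest = newest_backup(BACKUP_DIR)
    if latest is None:
        return ["No backup archives found at all."]

    age_hours = (time.time() - latest.stat().st_mtime) / 3600
    if age_hours > MAX_AGE_HOURS:
        problems.append(f"{latest.name} is {age_hours:.0f} hours old; backups may have stopped running.")
    if latest.stat().st_size == 0:
        problems.append(f"{latest.name} is zero bytes.")

    # "Validate" means more than existence: at least confirm the archive opens and lists.
    try:
        with tarfile.open(latest) as tar:
            tar.getmembers()
    except (tarfile.TarError, OSError) as exc:
        problems.append(f"{latest.name} failed to read: {exc}")

    return problems

if __name__ == "__main__":
    issues = check_backup()
    if issues:
        print("BACKUP CHECK FAILED:")
        for line in issues:
            print(" -", line)
    else:
        print("Latest backup looks fresh and readable.")
```

A check like this only proves the archive is readable; periodically doing a full test restore, as Tarr notes, is still the only way to know how long recovery will actually take.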
We haven’t even gotten to the network part yet.
RW: Once a manager knows they need a plan, what’s the next step?
Tarr: A plan is only as good as how you execute it. So what are the vital components of your operation? Once you’ve identified those, what happens if those were all to fail? How would you restore those? Even better, is there a way to really harden the network?
For example, by now everybody should be segregating their automation networks from their office networks. However, you can’t do that 100%; you have to be realistic. Short of sneakernetting, using thumb drives and tools that can kill an infection, how can you get files from Point A to Point B?
Something as simple as “How many file shares do you have, and how many do you really need?” Does everybody need to have access to everything? Really take a serious look at the roles of each individual in the organization. What do they truly need access to?
Then how can we isolate things? We know that an automation system isn’t going to get the ransomware on its own, so look at what kinds of actions people could take to infect the network.
What if the program directors want access to the computers on that automation network? In the old days we’d just throw on another network card, put their computer on there and they’re good to go. You can’t do that anymore. So you look at maybe a thin client on your desktop, where there really aren’t any services running other than sharing a video feed between the two machines.
Where do you keep your financials? Where do you keep HR stuff? How do you segregate that? Again, most of that is going to have to live on a network somewhere; what do you do to keep those files safe?
That’s step number one: getting things locked down, the network segregated, backup plans in place. You can’t do just a single backup and hope for the best; you need to rotate backups and take backups offline so they can’t ever touch the network. That’s saved me more than once, where my backup was a disconnected drive so it never got touched. I was able to restore cleanly without any problems.
You want to validate those backups to make sure. There’s nothing worse than going to restore a backup and realizing it hasn’t run for three months because nobody was paying attention.
How will you communicate [after an attack]? Most of the time your email is going to be down and everybody’s computers are going to be off, so how do you communicate what’s going on? How do you communicate to your advertisers that everything is okay? Because the word will get out. How do you put logs together?
Assume that everything involved in your operation is off and there’s nothing you can do right now about that. How do you manage that? Do you set up a Twitter account for employees? Do you prepare a list of their personal email accounts so that you can do a mass email with “Here’s what’s going on” status updates on what’s functioning and what’s not functioning?
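That kind of out-of-band status blast is easy to prepare ahead of time. As a sketch only, assuming a contact list kept off the company network and an external mail provider (all of the hostnames, filenames and credentials below are placeholders, not anything from the interview):

```python
# Sketch of a mass status update to employees' personal email addresses.
# Contact file, SMTP host and credentials are hypothetical placeholders;
# the list and this script need to live somewhere that survives an outage.
import csv
import smtplib
from email.message import EmailMessage

CONTACT_FILE = "emergency_contacts.csv"   # columns: name,personal_email (keep a printed copy too)
SMTP_HOST = "smtp.example.com"            # external provider, not the company mail server
SMTP_PORT = 587
SENDER = "station.status@example.com"

def load_contacts(path: str) -> list[str]:
    with open(path, newline="") as f:
        return [row["personal_email"] for row in csv.DictReader(f) if row.get("personal_email")]

def send_status_update(subject: str, body: str, password: str) -> None:
    msg = EmailMessage()
    msg["From"] = SENDER
    msg["To"] = SENDER                            # everyone else goes on Bcc
    msg["Bcc"] = ", ".join(load_contacts(CONTACT_FILE))
    msg["Subject"] = subject
    msg.set_content(body)

    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
        server.starttls()
        server.login(SENDER, password)
        server.send_message(msg)   # Bcc header is stripped before sending

if __name__ == "__main__":
    send_status_update(
        subject="Station status update",
        body="Email and file servers are down. Automation is on the air. Next update at 6 p.m.",
        password="app-specific-password",  # placeholder
    )
```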
Have you thought about how you’re going to play back audio if your playback machines, heaven forbid, get hit? How are you going to bill clients for the spots that ran? A file server backup could take hours to days to restore. What do you do?
That’s the holistic approach people are missing.
RW: I do have the sense that more organizations are trying to raise awareness on this.
Tarr: Unfortunately, there hasn’t really been a lot of discussion, because companies are afraid to talk about it. Companies that have gotten hit are afraid to talk about it; they don’t want to talk about where they went wrong for fear of somebody thinking that they’re weak or incompetent.
That public station [KQED] that got hit a year or two ago, they really were upfront about the challenges they ran into. But nobody has really taken the time to talk about, from a broadcasting point of view, what the best practices should be.
When this happens to you, be clear and say, “Yes, it’s a very common thing. Yes, we got hit by ransomware and everything’s okay, data is secure, we have a plan and we’re implementing it.”
Getting hit with ransomware is not unusual. It’s nothing to be ashamed of. The success stories are in how you limited how you were affected, and how quickly and efficiently you restored your systems. How quickly did you get your traffic system back up? We were scheduling spots on Monday following the weekend. That’s where planning comes in. There will be things that’ll be out of order for a week or two because they’re low priority. [But] how quickly can you get your critical functions back up?
There’s not enough dialogue about that in broadcasting.
RW: Ransomware catches people’s attention. Should a broadcaster ever pay a ransom?
Tarr: No, you shouldn’t. There’s no guarantee that it’s going to work.
There’s a school of thought that the person who wrote ransomware is going to unlock it if you pay them, because it’s their — for lack of a better word — reputation; but you just don’t know. And payment encourages them; there’s a potential to make yourself a bigger target.
That’s why it’s so important to focus not only on prevention but on response. If you can respond properly and you have a good plan, you don’t need to pay the ransom, you will have all of the things you need to rebuild.
The only thing that I would completely harden in this environment would be your automation system, your bread and butter. That’s easy enough to do because you don’t have to have that on a public network.
The problem that exists these days is that security is inconvenient. Unfortunately a lot of the people on the other side of the building, the creative people, the sales people — they understand inconvenience, they don’t understand security. When you say, “No you can’t move those files around, and no, you can’t connect to that automation system,” it’s inconvenient, and they put up a fight. There has to be education there.
As long as your automation network is segregated physically, you can at least stay on the air. That is the number one. You can always hand-write logs, you can hand-write billing, but if you’re not on the air, you’ve got a problem.
That’s how you have to approach it: different levels of importance. Being on the air is most important. Second would be billing: how do we get the billing done, how do we reconcile? Then everything else. It’s trivial to back up office computers and restore them. Nine times out of 10 there’s nothing on them so critical that a three-day-old backup is the end of the world. We had computers that were offline for a week or two. When we got to it, we got to it.
RW: You mentioned automation, but there are other vulnerable mission-critical systems, right? Transmitter remote controls, interfaces, EAS.
Tarr: Those need to be firewalled and password protected. They’re not going to get affected by ransomware, but you need to be smart about them. What a lot of people are turning to now are firewalls and virtual LANs, so these devices are kept on a separate subnet, you’re only opening the ports that are necessary to access them, and you’re changing the default passwords.
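One way to keep that honest is a periodic audit of what those devices actually expose. As an illustrative sketch, not Tarr's procedure, the script below assumes a made-up device list on a management subnet and flags any port that answers beyond the ones you intended to leave open:

```python
# Rough sketch of an open-port audit for broadcast gear on a management VLAN.
# Device names, addresses and expected ports are made-up examples.
import socket

DEVICES = {
    "transmitter-remote": ("192.168.50.10", {443}),        # e.g., HTTPS UI only
    "eas-box":            ("192.168.50.20", {443}),
    "codec":              ("192.168.50.30", {443, 9000}),
}

PORTS_TO_PROBE = [21, 22, 23, 80, 443, 8080, 9000, 10000]

def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> set[int]:
    """Return the subset of ports that accept a TCP connection."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.add(port)
    return found

if __name__ == "__main__":
    for name, (host, expected) in DEVICES.items():
        actual = open_ports(host, PORTS_TO_PROBE)
        unexpected = actual - expected
        if unexpected:
            print(f"{name} ({host}): unexpected open ports {sorted(unexpected)}")
        else:
            print(f"{name} ({host}): only expected ports open {sorted(actual)}")
```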
I used to love being able to get to my stuff from the outside world, logging in and doing things while I was driving around or wherever I was. Those days are over, because if I can do it, anybody can do it. So now we’re doing VPNs and virtual LANs to separate them from the office networks.
A lot of the stories you see, EAS boxes getting hacked and those Barix boxes getting hacked, were just because of sloppiness. We have a lot of engineers who are not IT guys, and a lot of IT guys who aren’t engineers. It’s one of the bigger problems in our industry, speaking of firewalls: we’ve built this firewall between IT and engineering. They don’t understand each other’s goals.
I’m lucky that I have a background in both, but in a lot of places, either engineering runs everything and you’ve got firewalls open and ports open, or IT is in charge and the engineer can’t do anything.
A lot of IT guys don’t understand broadcast stuff. For example, PSD or RDS data. Before point-to-point firewalls were common, you’d have a DSL connection at a translator site or whatever, and you’d send that data over the public internet. More than once I’ve had an IT guy say, “Oh, you can’t do that.” Actually you can, and you have to. They just don’t understand that.
Make sure that engineers and IT guys understand each other’s goals. The IT guy has to know that there are going to be some things that have to get done that may require special consideration on the engineering side. Engineers are going to have to understand you can’t throw a bunch of ports open so you can get to your Burk. VPN or something else is going to be required.
A lot of security rides on that relationship.
RW: But when the head of a radio group reads a headline about a competitor being hit by a ransomware attack, calls the engineer in and says “How do we make sure this never happens to us,” the answer can’t just be, “We need a better relationship between engineering and IT,” right? What does the engineer say to the CEO?
Tarr: You need to get stakeholders together and come up with a plan. It’s hard to act as one cohesive unit if you’ve got these varied departments with their own priorities. You can’t even set up a security plan if you’ve got the program directors insisting they must have access to this or that.
Let’s be honest, in a broadcast situation, a lot of times the engineers don’t carry a lot of power, so they can’t tell the program directors, “No, you can’t do that.” You need to get buy-in from the CEO all the way down to the part-time guys. Get everybody together in a room and say, “First of all, let’s talk about how this could happen,” or bring in a security consultant to talk about those things. The second part is to know that if this happens, we know what we’re going to do, we know what the expectations are. To be able to say, “Yeah, if we get hit, we’ll be down for a day, but here’s what we’re going to do and here are the steps we’re taking to make sure that plan can be executed.” That makes you more confident: “We could get hit and you know what? We’ll be okay.”
RW: People reading this will be well aware of the attack on Entercom. Knowing you can’t talk about every aspect, what can you share about what the company did or learned?
Tarr: Well, unfortunately I still can’t. There’s really not much I can divulge. The only thing I can say personally is that I was very proud of how we responded internally. We were back up and running very quickly. We had a solid plan. We worked over the weekend, we implemented the plan and it was a success.
We didn’t look at this as a failure by any means. That’s the mindset people have to have. It wasn’t a failure that we got hit with ransomware; it’s going to happen, it happens to everyone. Had we not been able to respond to it and had it crippled our business for a month, that would have been a failure.
The biggest thing I can say is, “Don’t think for a minute it’s not going to happen, or that just the basics are going to help you.” This is a rapidly changing environment. A good security consultant is worth their weight in gold. Hire one and have them look at what you’re doing, talk about what your job function is and what you want to achieve as a company. Get that advice.
RW: Other specific best practices to mention?
Tarr: I’ll probably get in trouble with my boss for saying this, but I’m not a big fan of password changing. Once your password’s out, your password’s out. Password complexity is good, but the 90-day rule may not be very effective. It’s not like a password gets leaked and then they sit on it for six months.
Obviously, education. Make sure that people understand: if somebody sends you a link to something, verify with them. Call and say, “Did you send this to me?” Today’s viruses and nastyware always look like they came from somebody you know. Unless somebody says specifically “I’m going to send you this,” don’t open it until you verify that they actually did.
We talked about hardening your automation network. Do not plug it into the office network at all. And if you do, make sure that it’s firewalled and that you’re only opening the ports you need to open. Make sure they don’t touch each other, other than what you absolutely need.
Third, physical security. My server room is locked up because who knows what could happen, sabotage wise or information security wise? Even just curious part-timers can get in and wreak havoc.
Backups. Take lots of backups and verify them regularly. Preferably have a backup offsite. At the very least, rotate what I call “air gap” backups: have a backup that’s not connected to anything, and rotate them off. If the infection spreads, you’ve got a good clean backup to restore from.
Obviously antivirus, those sorts of things. There’s also new software specifically for detecting malware and ransomware. It plants a couple of honeypot files, and the moment the malware touches one of those files and attempts to change or lock it, the software shuts everything down.
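The commercial tools Tarr is referring to are purpose-built, but the canary-file idea itself is simple enough to sketch. This toy version, with made-up file paths and a print statement standing in for a real response, just polls a few decoy files and raises the alarm the moment one of them changes or disappears:

```python
# Toy canary-file watcher. Decoy paths and the "response" are illustrative;
# a real tool would disable shares, power off the server, page the engineer.
import hashlib
import time
from pathlib import Path

CANARIES = [
    Path(r"\\fileserver\shared\00_accounts_do_not_open.xlsx"),  # hypothetical decoys, named so
    Path(r"\\fileserver\shared\zz_archive_passwords.docx"),     # ransomware enumerates them early
]

def fingerprint(path: Path) -> str:
    """Hash the file contents so any encryption or overwrite is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def respond(path: Path) -> None:
    print(f"ALERT: canary {path} was modified or removed; possible ransomware activity")

def watch(interval: float = 5.0) -> None:
    baseline = {p: fingerprint(p) for p in CANARIES}   # assumes the decoys already exist
    while True:
        for path, original in baseline.items():
            try:
                if fingerprint(path) != original:
                    respond(path)
                    return
            except OSError:      # deleted or locked also counts as a trip
                respond(path)
                return
        time.sleep(interval)

if __name__ == "__main__":
    watch()
```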
Antivirus is great but it’s not a firewall and it doesn’t really do anything for ransomware. It’s kind of one of those “inch deep mile wide” pieces of software. You really want to get specific and look into smart firewall appliances that will stop it at the door. Short of having a security consultant, that’s the next best thing: Have a firewall that inspects the packets coming in, and get something with a subscription to a database that keeps that up to date. If you could stop this stuff from getting in the door, that’s 90% of the issue.
We all think we’re the smartest guys in the room. Engineers are notorious for that. There’s someone out there smarter than us working on what they’re going to do next. Don’t assume that because you’ve read the latest books and the latest information you’re safe, because there’s always somebody smarter, and they’re always out there trying to wreck your stuff. Part of a complete plan is assuming that it’s going to happen. And if it doesn’t, that’s great; if it never happens to you, bless you. But assume that it will, know what you’re going to do, know how you’re going to respond, and make it automatic. Write the plan down, make sure everybody knows what the plan is, and then you’re ready to execute it when it’s necessary.