Essential security practices to protect your business
Merry Cybersecurity Awareness Month! It’s going well, isn’t it? I think we are collectively more aware than we’ve ever been about the risks we face, both as consumers and as professionals.
Why do so many individuals and businesses live in fear of cyber attacks? Many customers I talk to still feel they are vulnerable in a dozen different ways. They might believe parts of their security stack are sufficient but also that attackers nowadays are insidious, capable, and determined. Cyber criminals are motivated by money and power; hacktivists have a cause to champion; state-sponsored attackers’ goals range from espionage to intellectual property theft to political and military impact.
We at CBTS believe that a strong security program that protects your data and assets will involve a basic set of practices that are essential—no matter how large or small, what industry you’re in, or what data you are responsible for. Those practices won’t save you from every attack, but you’ll certainly be better defended against opportunistic, less-skilled adversaries.
The challenge for most organizations is that those practices are tough to start. They require the right tools, people to run them, and rigorous procedures that will ensure their effectiveness. We see businesses start moving in the direction of these practices but over time, devotion to them wanes as other priorities crop up, or other projects demand the attention of the staff.
This reminds me of the Star Trek: The Next Generation episode “Genesis.” In it, the crew of the USS Enterprise is subjected to a virus that causes them to slowly devolve into other lifeforms—their behavior begins to resemble that of a primate, a spider, a reptile. One of the crew, Lt. Worf, begins to exhibit violent tendencies. After Worf injures another crewmember, he goes into hiding. In command of the Enterprise, Commander Riker wants to find him, but the effects of the virus are clouding Riker’s thinking. When Lt. Cmdr. Geordi LaForge comes and asks to help find Worf, the exchange is pretty funny:
LAFORGE: Commander, I’ve got seven security teams out hunting for Worf, but for some reason sensors are having a difficult time locking into him. I’ve called for a level two security alert. Do you think we should go to a Level One?
RIKER: (Pauses, clearly stumped)… I don’t know. What do you think?
LAFORGE: I think we should.
RIKER: Okay. Sounds good. …Then you’ll take care of that…security thing?
LAFORGE: Yes, sir. I will.
Often this is what we face as a security services company: Customers having trouble knowing what security practices to implement and how to implement them. This is why we’ve built our Managed Security team—to provide a set of essential security practices to our customers, consumed on an as-a-service basis.
These essential practices—security monitoring, vulnerability management, endpoint protection, multifactor authentication, and backups—should be a part of every company’s core security function. Can you imagine a front door without a lock, or a bank without security cameras? Going into 2022, any business with information that resides on computers connected to a network must invest in these practices or face serious risk of theft, ransomware, and other threats. Interested, but don’t know where to start? We’re having a webcast to talk more about these practices, as well as some tools that work well to map out a strategy to start doing them. Register for the webcast here.
Zero Trust Networks (ZTN): what are they and how do I implement one?
One of the many buzzwords in information security media today is Zero Trust Networks, or ZTN. I like a good acronym as much as the next person (it’s certainly easier to type), but it can be hard to understand how you, as a CIO, can implement a ZTN.
In a sense, a ZTN is what most of us do every day when we walk or drive to an unfamiliar place. Imagine you live in a city or suburb and you’re heading to a hot new restaurant, but you don’t know the neighborhood.
What do you do?
Do you treat this new neighborhood like your own, where you know everyone and know who and what you can trust? No, of course not.
You take some time to get context (in other words, understanding) about this new place to see if you can safely and easily park your car, lock up your bike, or walk to it for dinner. You scope out the area to figure out how safe things are in this new environment.
The new bistro has to scope you out, too. Are you safe? Are you someone who can be trusted to pay the bill at the end of the meal? Do you present a threat to them?
You don’t trust the new neighborhood randomly and they don’t trust you right away either.
How does this play out in the information security space?
The average company today has multiple vendors that either provide a service or are customers that need access to your network or services. As the CIO, you have created a very secure private network that typically has a VPN for remote access, and you have vendors providing or consuming services outside of your trusted network. See this basic diagram below:
You can make this diagram more complicated with a DMZ, load balancers, web application firewalls, cloud services, and other things, but this covers the basic environment.
Where are the risks to you and your vendors?
There are three basic threat vectors for modern networks.
A user may have compromised credentials that can be used in an attack to gain access to your network or your vendor’s network.
A device may be compromised on your internal network, your vendor’s network, or the remote network. That compromised device can then attack you and/or your vendor(s).
A software system—like an API—can be compromised and that can impact or infect data on your network, the vendor’s network, or the remote workers.
If you think about the number of devices you have, the number of users, and the number of vendors, you can see how the risk to you and your vendor partners has increased exponentially.
How do you create trust, and where does this happen?
Often trust is established with tokens (or a security certificate) assigned to a user, a device, or even a program after identity and authorization have been verified.
Imagine a network configuration that says, “I don’t trust any computer, user, or process until that computer, user, or process has provided credentials (for example, a username and password or an X.509 certificate) that have been validated (usually with some kind of second factor: an SMS text, an authenticator push, or a Certificate Authority) as authentic.” Only then does the network confirm that the computer, user, or process is authorized to do what it wants to do. Yes, this includes traditionally “trusted” assets, like your own workstations!
The requirement to provide credentials and have them validated and then check for authorization is the basis for Zero Trust. The phrase that is often used is, “Trust nothing, verify everything.”
Because we can’t rely on the IP address of a machine to give us some measure of “identity” (in other words, “I trust this PC because it has our internal IP address”), the machines have to be validated. Typically this validation is with a certificate that is pushed out to the device from a centralized Certificate Authority. There are solutions that automate this process and can provide context before issuing a certificate to a device. Context in this case means, “Have I seen this PC before? Do I recognize the MAC address, serial number, or does it have an IP address I recognize?” The more context you have about a device, the more confidence you have that the device can be trusted. Keep in mind, the trust extended to that device is for that session only, or for a predetermined length of time.
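As a rough illustration of context-based trust, here is a minimal sketch in Python. The device attributes, inventory values, and score threshold are all hypothetical, and real Zero Trust products implement this far more robustly:

```python
from dataclasses import dataclass

@dataclass
class Device:
    mac: str
    serial: str
    ip: str

# Hypothetical inventory: identifiers we have seen and recorded before.
KNOWN_MACS = {"00:1a:2b:3c:4d:5e"}
KNOWN_SERIALS = {"SN-1001"}
TRUSTED_PREFIX = "10.0.1."  # an internal subnet we recognize

def context_score(device: Device) -> int:
    """Count how many attributes of this device we recognize (0-3)."""
    score = 0
    if device.mac in KNOWN_MACS:
        score += 1
    if device.serial in KNOWN_SERIALS:
        score += 1
    if device.ip.startswith(TRUSTED_PREFIX):
        score += 1
    return score

def grant_session(device: Device, min_score: int = 2) -> bool:
    """Issue session-scoped trust only when we have enough context."""
    return context_score(device) >= min_score
```

A known corporate laptop on a recognized subnet scores 3 and gets a session; a device with no recognized attributes is denied. The key point is that any trust granted this way expires with the session.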
Because we can’t rely on the user to provide just a username and password to prove that they are who they say they are, users have to be validated twice. Usually they identify with a username and password, then we confirm their identity a second time with some other method (an SMS or an authorization application like Duo, Microsoft Authenticator, or others). This multi-factor authentication (MFA) helps provide a level of trust that the person is who they say they are. Just like with the device, the authentication of the users is for that session only and the user will have to re-authenticate once they disconnect or end the session.
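The one-time codes that authenticator apps generate are standardized. Here is a minimal HOTP implementation (RFC 4226, the counter-based algorithm that time-based TOTP builds on) using only the Python standard library; this is a sketch for understanding, not a replacement for a vetted MFA library:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)  # counter as an 8-byte big-endian integer
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation: low nibble picks a 4-byte window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# TOTP (the time-based variant most authenticator apps use) is simply
# hotp(secret, floor(unix_time / 30)) for the common 30-second time step.
```

With the RFC 4226 test secret `12345678901234567890`, counter 0 produces the code `755224`, matching the published test vectors.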
As your security program matures you can also verify the software or applications that are running on your systems. Here you would most likely have lists of the applications that you trust and you have a hash value of the executables to make sure that the application has not been modified. This can be a bit complicated, but it is possible.
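A bare-bones sketch of that hash-checking idea follows. In practice this enforcement lives in the OS or an endpoint protection tool, and the allowlist below is just a stand-in you would populate with known-good hashes from trusted builds:

```python
import hashlib

# Application name -> expected SHA-256 of its executable (populate from trusted builds).
ALLOWLIST: dict[str, str] = {}

def file_sha256(path: str) -> str:
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_trusted(app: str, path: str) -> bool:
    """Trust the binary only if its hash matches the allowlist entry exactly."""
    return ALLOWLIST.get(app) == file_sha256(path)
```

Any modification to the executable changes its hash and the check fails, which is exactly the point.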
The main takeaway from this blog is that Zero Trust means—as the name implies—that you don’t trust anyone without some method (or methods) of authentication. For those of you thinking strategically, you might want to hold off on upgrading your VPN this budget year or next and consider instead a Zero Trust solution for your remote workforce.
Improve your cybersecurity defense with centralized logging, continued: A deeper dive!
In my previous blog post I talked about the value of centralized logging and gave a high-level, non-complex overview of how centralizing your logs can help you determine whether your controls and defense tools are working.
Now I will go a bit deeper with some best practices regarding centralized logging and what other logs you can put in your centralized log server. Before I do, imagine this scenario:
11:00 p.m., Saturday night, over the Labor Day weekend (in the U.S.): Your helpdesk reports that the network is slow in New York City. That is odd; no one is working Saturday in the New York office.
What is going on?
You haven’t implemented centralized logging yet, so you call the Operations Team (Ops) and notify them that something is going wrong in New York. You wait for Ops to get back to you. Thirty minutes pass; then you get a text back:
Ops: Yes, there is a problem in NY. It’s in the conference room, and someone or something is flooding the network with traffic. The entire network in New York is crawling at a snail’s pace.
Maybe some threat actor is working on a ransomware attack. Maybe someone has broken into the office in New York and is doing a denial of service attack. Maybe that new customer that asked for a demo on the Friday before Labor Day put a Raspberry Pi on the network in the conference room and is scanning the company network.
So, what is going on?
For you or your team to be able to answer this question quickly, you need to know what is happening on your network. As you read this post you might start to think, “I can’t afford this, John!” and you’re probably right. Information Security likely will not have the budget for centralized logging just for the sake of information security. But once you have the logs in a central location, they can be used for other business purposes. This is not a simple project, but then going to the moon wasn’t simple; nevertheless, it was accomplished. You need a good team and you need to stay focused to reap the big rewards. How, though, do you reap the big rewards?
First, you want to follow best practices, namely, plan ahead and think it through. Planning and thinking through this kind of project will pay off on several fronts, and not just for information security.
Here are some of the things to consider when you say to yourself, “I want centralized logging to improve my information security program.”
Step 1. Create a plan and have a strategy for this project. Do NOT just buy the first centralized log tool you find. Plan for what you want to collect. As part of this planning process you’ll ask questions of your network team and others like:
How big are the daily logs from the web servers, SQL, Oracle DBs, etc.?
What is our network traffic load like (Gigabytes of network logs? Terabytes of network logs)?
How many devices do we want or need to monitor (servers, switches, firewalls, wireless APs)?
From what other systems do we want to collect logs (Anti-virus, home-grown applications, VoIP traffic, printer logs, your Kubernetes farm, etc.)?
What kind of shop are you running? All Microsoft? All Linux? A hybrid?
Besides security monitoring, why are you logging all this information? Application troubleshooting? Customer support? Continuous improvement?
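To make the sizing questions above concrete, a back-of-the-envelope estimate helps; all the numbers below are illustrative, not vendor benchmarks:

```python
def daily_log_gb(devices: int, events_per_sec: float, avg_event_bytes: int) -> float:
    """Rough raw log volume per day, in gigabytes, before compression or indexing."""
    seconds_per_day = 86_400
    total_bytes = devices * events_per_sec * avg_event_bytes * seconds_per_day
    return total_bytes / 1e9

# Example: 200 devices, each averaging 5 events/sec at ~300 bytes per event,
# works out to roughly 26 GB of raw log data per day.
```

Running that example through the formula yields about 25.9 GB per day, which is the kind of number you need before you can size storage, retention, and licensing for a log tool.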
Step 2. Make sure the structure of the logs you are collecting is consistent.
You won’t be able to ingest logs from multiple data sources unless there is a consistent log format. Your network infrastructure devices will have a format—most likely syslog—and your firewall(s) will likely have a similar format, and then things can get proprietary (ugly, in other words). Remember, you can’t just dump data into an SQL server and then magically extract useful information and meaningful insight about your network.
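A sketch of that normalization step in Python: parsing an RFC 3164-style syslog line into a consistent record. The pattern, field names, and the sample line are simplified assumptions; real collectors handle many more formats and edge cases:

```python
import re

# Simplified RFC 3164 shape: "<PRI>Mmm dd hh:mm:ss host program: message"
SYSLOG_RE = re.compile(
    r"<(?P<pri>\d+)>"
    r"(?P<timestamp>\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2})\s"
    r"(?P<host>\S+)\s"
    r"(?P<program>[^:]+):\s"
    r"(?P<message>.*)"
)

def parse_syslog(line: str) -> dict:
    """Turn one raw syslog line into a consistent dict of named fields."""
    match = SYSLOG_RE.match(line)
    if not match:
        raise ValueError(f"unparseable log line: {line!r}")
    return match.groupdict()
```

Once every source is reduced to the same set of named fields, searching and correlating across device types becomes tractable.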
Step 3. A brief word about time and relativity and NTP.
This might be obvious, but to be clear: you need to make sure the logs all have the same time. All network devices and computer systems have a clock, so you will get a date and time for the events you are logging. Use Network Time Protocol (NTP) to sync all the systems to the same time source or you’ll have problems. Einstein proved that time is relative; for the purposes of logging events in a central location for troubleshooting, you need the clocks on your devices set to the same time and time zone. If you have a switch (or two) that thinks it’s 1990, but you know it’s 2021, you are going to have a real tough time figuring out what happened that Saturday night of Labor Day weekend (note that this is itself relative; Labor Day in the U.S. and Canada falls on a different date than similar holidays in Australia, Japan, New Zealand, etc.). Threat actors have calendars and know when people are likely to be away from their computers and monitoring systems, so plan accordingly.
Step 4. Make sure each data source has unique identifiers.
If you are searching through log data looking to see what happened Saturday night at 11:00 p.m. Eastern Time, make sure you know that the switch in the server room is uniquely identified compared to the switch in the conference room. Here is an example of a switch log record; note the various fields and values that you want to be able to search and index.
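A representative record, modeled on the Cisco IOS log style (this specific line is illustrative, not captured from a real device):

```
Sep  4 23:01:02.123: %LINK-3-UPDOWN: Interface GigabitEthernet0/3, changed state to down
```

The facility (LINK), severity (3), mnemonic (UPDOWN), interface name, and new state are all fields you would want to index.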
You can see lots of good information in that record, but what switch did it come from? You need to be able to answer that question or all your time and effort has been for naught.
Step 5. Keep your production logs and centralized logs separate.
This is probably obvious but I need to state it plainly: The centralized log server does not replace your SQL logs (or Oracle logs or other production logs). When you need to roll back transactions in SQL or Oracle, etc., you are going to use those production logs. The value of the centralized log tool is gathering other insights. I’m thinking security insights (telemetry, correlation, etc.), but it could be troubleshooting a cranky application, dropped VoIP calls, or providing customer support.
Yeah! I’m done! Wait, I’m not?
Well, you’re more than halfway done. You’ve done the heavy lifting of getting your log data organized and centralized so that you can identify problems on your network when they happen. That is great. Now you get to use this new tool to get insight into what is happening on your network.
Flash back to the start of this post and you can see how this tool can help you figure out what is happening.
11:00 p.m., Saturday night, over the Labor Day weekend: The helpdesk reports that the network is slow in New York City. That is odd; no one is working Saturday night in the New York office.
What is going on?
You tell the helpdesk to put in a ticket to Network Operations, and the Ops team opens up the centralized log server and does a query. Sure enough, a switch in the conference room is blasting out a ton of bad packets. Looking a bit deeper, they see it’s an IoT device attached to that switch that has gone bad and is flooding the network.
No other alerts have been triggered.
The firewall is not showing unusual activity out of New York, or anywhere else.
The database servers are humming along fine in the server room.
The only problem is this one switch in the conference room.
It’s not ransomware, and you’re not under attack.
You don’t have to call the CEO or CFO about a possible ransomware incident.
The Ops team shuts off the port on the switch, traffic returns to normal, the event is logged in the ticketing system, and the New York network person has to replace the bad IoT device Tuesday morning.
Mystery solved, crisis averted, and you can chalk up that win to using the centralized log server to identify the offending switch. And as you continuously improve your cybersecurity posture throughout this year and into the next, that is all the more reason to add the centralized log server to your toolbox.
Improve your cybersecurity defense with centralized logging
In my previous blog post I talked about the MITRE ATT&CK framework and how it helps you determine possible threats and threat actors’ techniques so that you can better focus your limited resources on the more likely threats.
The next questions you might have are, “Am I being attacked?” and “Are my defenses working?” To answer those questions you need to know what is happening on your network. To know what is happening, you need to log activity on your network from a few sources.
Take your typical network that consists of a wired network (the PC connected to the switch) and some wireless laptops (connected to the wireless access point). The switch and the access point connect to a router and then to the firewall.
If you want to know what is going on in your network, you want to see the network activity (traffic) flowing through the wireless access point, the switch, the router, and the firewall. To do that you have to log which devices and traffic are on your wireless and wired networks, as well as the flow of traffic between the wired and wireless networks, and between the router and the firewall.
Typically you would have the access logs or system logs from each of these devices sent to a central collector, called (surprise!) the system log server, or syslog server. Your network would now look something like this:
Now that you are collecting this traffic information on a daily basis, you can then run searches (usually automated) that look at the log data and tell you if some odd or suspicious traffic is on your network.
You can search the syslog server for bad traffic coming from the internet to your firewall and confirm that the firewall is blocking it. Or, you can confirm that you only allow certain kinds of network traffic to leave your network, preventing private or sensitive data (think PII, HIPAA-regulated data, intellectual property, CUI, etc.) from leaving via Dropbox, Google Drive, or Box. By checking the firewall logs you can tell that your data is not leaving your network through the firewall.
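That kind of check can start out as a simple filter over the collected firewall lines. The log format and keywords below are invented for illustration, since every firewall vendor logs differently:

```python
def denied_outbound(log_lines: list[str]) -> list[str]:
    """Return firewall log lines where outbound traffic was denied."""
    return [line for line in log_lines if "DENY" in line and "outbound" in line]

# Illustrative lines pulled from the syslog server (made-up format and addresses).
firewall_logs = [
    "2021-09-04T23:01:02Z fw1 DENY outbound 10.0.1.23:51515 -> 198.51.100.7:443",
    "2021-09-04T23:01:05Z fw1 PERMIT outbound 10.0.1.9:443 -> 93.184.216.34:443",
    "2021-09-04T23:01:09Z fw1 DENY inbound 203.0.113.8:40000 -> 10.0.1.5:22",
]
```

Here the filter surfaces the one denied outbound connection, which is your evidence that the egress rule actually fired.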
You can search the syslog server for unknown devices on the wireless or wired network. You know which devices should be on the network, because you know what devices you own or have provisioned for your users. If an unfamiliar device shows up in the wireless log or the wired (switch) log, you know you have to find out what that device is. How did it get there? Did someone bring in their own wireless access point to get a better signal in their office? Did they bring in a wireless printer so they can print in their office? By looking at the logs for those two networks you can determine that.
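At its core, the unknown-device check is a set difference between what the logs observed and what your inventory says you own; the MAC addresses here are made up for illustration:

```python
# MAC addresses you have provisioned, from your asset inventory (hypothetical values).
INVENTORY = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def unknown_devices(seen_macs: set[str]) -> set[str]:
    """Devices that showed up in the wireless or switch logs but aren't inventoried."""
    return seen_macs - INVENTORY

# MACs extracted from the syslog server for the period you're investigating.
seen = {"00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01"}
```

Anything the function returns is a device to go track down, whether it turns out to be a rogue access point, a personal printer, or something worse.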
Your network team knows whether traffic from the wired network should be allowed to flow to the wireless network, or the other way around. Maybe you allow that kind of traffic flow, maybe you don’t. Either way, with a syslog server you can confirm that only allowed traffic is flowing between the wireless and wired networks by looking at the traffic logs from the router.
This is a simple example to help you visualize how collecting this network traffic allows you to see if the controls (access control lists [ACLs], firewall rules, network access control [NAC] rules, etc.) are working as you expect.
Improve your cybersecurity defenses with the MITRE ATT&CK framework
In my previous blog posts I’ve talked about the NIST CSF, and then I talked about another framework from the non-profit Center for Internet Security (CIS), which has a smaller set of controls to help companies and organizations secure their environment.
I promised at the end of that post that I would talk about the MITRE ATT&CK framework. But first—because I am sure some of you asked—I’ll tackle the questions: who is MITRE and what does ATT&CK mean?
MITRE is a non-profit organization that manages federally funded research and development centers, developing tools and researching issues for U.S. government sponsors in areas like aviation, healthcare, and homeland security. ATT&CK is a framework that helps cybersecurity teams—both red and blue—figure out how threat actors gain access to computers and systems and what they do once they have that access.
ATT&CK stands for Adversarial Tactics, Techniques & Common Knowledge.
Think of it as a playbook that an adversary uses to break into your mobile phone, tablet, computer, or computer system. The ATT&CK framework is like having your opponent’s playbook in a football game. Every organization has limited resources and knowing where to focus your attention helps you utilize your resources most effectively. The framework is free and was first published in 2015, so it is well known in cybersecurity circles.
Here is an example of how to use it:
Imagine you are a nonprofit that supports human rights, and because of what you do, you will be targeted by certain threat actors. As a nonprofit, you have few resources to devote to cybersecurity, so you search ATT&CK for malicious actors who target organizations like yours and see what techniques they tend to use. The ATT&CK index identifies malicious actors and who they tend to attack. In your search of the ATT&CK site you see that APT18 targets human rights groups and tends to focus on External Remote Services (like a VPN or a Citrix server) rather than phishing emails to gain access to computer systems.
As you review the techniques APT18 uses, you find Technique T1133 and read the ways to mitigate that threat.
You can now focus your limited resources on mitigation techniques for remote services to help block that threat actor.
If you look at APT18, you’ll see that they tend to use eleven techniques to gain access; ATT&CK identifies those techniques and how to mitigate the threats. The framework is useful for beginner, intermediate, and advanced security teams because it has the technical depth to grow and mature your security posture.
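The ATT&CK dataset is published in machine-readable (STIX/JSON) form, so this kind of lookup can be scripted. The sketch below uses a tiny hand-built excerpt rather than the real feed, so treat the mapping (beyond the T1133 pairing discussed above) as illustrative only:

```python
# Hand-built excerpt of group -> technique mappings; not the real ATT&CK feed.
GROUP_TECHNIQUES = {
    "APT18": [
        {"id": "T1133", "name": "External Remote Services"},
        {"id": "T1071", "name": "Application Layer Protocol"},
    ],
}

def techniques_for(group: str) -> list[str]:
    """Technique IDs attributed to a threat group in this excerpt."""
    return [t["id"] for t in GROUP_TECHNIQUES.get(group, [])]
```

From there you would pull the mitigation entries for each technique ID and turn them into a prioritized to-do list for your team.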
If you are just starting your cybersecurity journey you will quickly discover that you need to log what is happening on your network and on your computers and systems in order to know what to look for and where. Are you looking for malicious network traffic or unusual activity on your mobile devices and Windows and Mac computers? Are you checking your firewall logs, your antivirus logs, and your system event logs for suspicious activity? If you are not logging that information in a central server you will have a hard time finding the threats to or on your network.
I’ll talk about getting all those log files together so you can go searching in my next blog post.
Getting ransomware-proof, continued: CIS controls for medium-size organizations
In my previous post on the question of being ransomware-proof, I talked about the NIST Cybersecurity Framework (CSF). Some of you, I am sure, Googled “NIST CSF” and found tons of information from NIST on the framework. Then as you looked at the details, you might have been intimidated by the five functions (Identify, Protect, Detect, Respond, and Recover), the 23 categories, and the 108 subcategories. It might have sounded too complicated, too much to bite off, and you might have even wondered, “Where do I start?”
First, that feeling is totally understandable. The NIST CSF is a comprehensive framework. It works well for regulated companies like banks, utilities, and hospitals: organizations that have regulatory compliance requirements to address, that have to protect their customers’ data, and that also have to prove that they have protected that data.
Recall that at the end of that post, I said I would talk about CIS Controls as another framework you can use.
For medium-size companies that may or may not be regulated, or do not have to adhere to a compliance standard, the Center for Internet Security (CIS) Controls might be a better solution. CIS has a set of controls that can be downloaded for free and can be more easily applied to manufacturing, service organizations, retail, schools, and other verticals that are not tightly regulated.
CIS Controls version 8 has 18 categories, with safeguards inside each category that map to a particular asset type (like a computer, a software application, company data, or the corporate network). Each safeguard performs a particular function (Identify, Protect, Detect, Respond, or Recover) for that asset type. Finally, each safeguard is tied to an implementation level of 1, 2, or 3 (CIS calls these Implementation Groups), which varies based on how far along a company is with its security program. Level 1 is for those just getting started, Level 2 is more advanced, and Level 3 is the most advanced.
You’ll notice that the CIS Controls map to the same general categories as the CSF; that is done intentionally to help companies and organizations understand how they compare with their peers and to communicate with auditors, boards, and risk committees.
The CIS Controls are written in easy-to-read language, with clear functions and safeguards that are plainly identified and can be implemented at Level 1 with no-cost or low-cost tools.
Often the topic of cybersecurity is compared to eating an elephant—daunting and unapproachable—but when you look at the CIS Controls you can see how the process is laid out in an understandable way that lets you start your journey toward a safer and more secure environment.
In my next blog I’ll round out my Framework discussion with MITRE ATT&CK.
If you need guidance to implement or upgrade your cybersecurity program, contact the security team at CBTS. We can help your organization get ransomware-proof and stay that way.
Read more from CBTS Consulting CISO John Bruggeman:
What do new TSA requirements mean for the security of your critical infrastructure?
The Transportation Security Administration (TSA) announcement in May of new requirements for owners and operators of gas pipelines is an indication that the federal government is not going to take a light-touch approach to cybersecurity. Rather than making recommendations, it is issuing requirements.
The change follows the attack on Colonial Pipeline in mid-May that crippled nearly half of the fuel supply for the East Coast. There have been previous attacks on critical infrastructure in other countries, such as the 2018 attack in Saudi Arabia, and several attacks in Ukraine, most recently in December 2016, when power was cut in parts of Kiev.
Clearly the risks to critical infrastructure have never been higher and the federal government is moving forward with new rules for all critical infrastructure as noted in this recent fact sheet.
So what should you do?
Plan to follow the rules just released by the TSA for gas pipeline companies, because they will likely soon be applied to your industry:
1. Appoint and identify, within seven days, a cyber coordinator (and a backup cyber coordinator) who is available 24×7 to officials at the Cybersecurity and Infrastructure Security Agency (CISA, part of the Department of Homeland Security).
2. Report all cyber intrusions to CISA within twelve hours of the incident.
3. Develop and implement a contingency and recovery plan for cyber intrusions.
4. Compare the plan with DHS standards, identify gaps, develop measures to fill them, and gain CISA approval for those measures.
5. Use a cybersecurity framework to provide a roadmap for fixing the problems or gaps you discover in step 4. Using a framework will help you and your team prioritize and address the biggest risks first.
You should also consider joining the appropriate information sharing and analysis center (ISAC) for your industry. There is one for electricity called E-ISAC, plus others for industries like healthcare, financial services, communications, aviation, and chemicals. You can find more about them here at the national ISAC organization. If you need more help, contact the CBTS Security practice.
Can you be ransomware-proof? Is that even possible?
Wouldn’t it be great if your information security program were at the point where you had confidence that, if a criminal gang attacked you, you would be able to defend yourself, keep your business going, and notify the appropriate legal authorities and any vendor partners that might be impacted?
Yes, it would be, and yes, it is possible. Getting to that point is the goal of a mature security program. With a mature security program you are able to keep your business running even while you are attacked or recovering from an attack.
The question is, how do you get to the mature state? What does it take?
Many business leaders assume they don’t have enough budget or resources to achieve that level of cybersecurity capability. How do you start down the path of having a robust, mature information security program?
First, you make information security a priority. Your Board agrees, and you make room for it in your budget and in your business plan.
Second, you choose a framework for your security program that works for your organization.
But what is a framework?
An information security framework is a series of documented processes that define policies and procedures for the implementation and ongoing management of information security controls in your company. Frameworks like the NIST CSF, CIS Controls, COBIT, and ISO 27001 are blueprints for building an information security program that allows you to manage risk and reduce vulnerabilities.
Over the next few blog posts I will take a look at these frameworks at a high level so you can figure out which one makes sense for your company. I will start with the NIST CSF.
NIST (the National Institute of Standards and Technology) is a government-funded agency that works for you and me by setting standards we use every day. NIST is the reason you know you are getting 1 gallon of gas when you fill up your tank, rather than 0.99 or 0.95 gallons.
NIST has THE gold standard for weights and measures. It also sets standards for encryption technology, and it gave us AES encryption, which virtually everyone uses today to secure transactions online.
Acting on presidential orders in 2013, NIST—working with private industry—studied the problem and developed a guide (the CSF framework) to help companies manage and reduce cybersecurity risk. One way to think of the framework is by the five core functions it describes: Identify, Protect, Detect, Respond, and Recover. Each of the functions helps guide an organization to think clearly about what they have, how to protect it, how to detect if something bad happens, how to respond, and then recover.
Frequently, companies walk through these five functions and review the questions asked in each area (just over 100 questions in total) to see how they are doing. The language is understandable and consistent, so the whole team is on the same page.
Using the five core functions as focal points for your attention, you can then begin to build your security program using consistent, understandable language that you, your team, and the board can understand.
In our next blog I’ll talk about the CIS Controls as another framework you can use.
How do you ensure the security of your supply chain?
Over the weekend another major crypto ransomware attack occurred, this time through an enterprise software vendor called Kaseya.
For many CEOs or business owners, that name might not be familiar, since many of the companies that use this software are Managed Service Providers (MSPs). The MSP uses the Kaseya software to manage their clients' computers. This kind of attack lets cybercriminals maximize the damage: instead of hitting one or two victims, they compromise one company that has connections to hundreds of others.
So what should you do if you have been impacted by this criminal attack? I’ve had similar considerations in my time as a security leader—here’s my take.
First, if you have cybersecurity insurance, hopefully you have called your insurance provider and you are working with them to obtain the necessary resources to get back up and running.
Second, once you have a minute to stop and think, review what other vendors you depend on to function as a company.
Do you have a payroll provider? If so, you will want to assess the maturity of their security program—perhaps by examining the results of an independent audit, such as a SOC 2 Type II report, to see how they are protecting your data.
Do you have vendor partners who have access to your company network? If so, you want to review how they protect their networks from cybercriminals so that if they are attacked, you don’t become a victim as well.
Do you use an MSP to help you manage your computers? If so, you also want to understand the measures they take to protect you from cybercriminals. Do they require multi-factor authentication (MFA) to access your network? Do they regularly update their computers and network to prevent attacks by cybercriminals using known vulnerabilities? Are they doing the same types of risk reviews you are with their own third-party service providers and vendors? There's a lot to consider when assessing the security of your supply chain. If you have questions about cybersecurity insurance, what a SOC 2 Type II audit is and how to interpret the report, or how to know if your MSP is protecting your data, contact the CBTS Security practice.
John is a veteran technologist, CTO and CISO. He has nearly 30 years of experience building and running enterprise IT and shepherding information security programs towards maturity, based on industry standards like ISO27K and NIST CSF, as well as regulatory compliance requirements from PCI-DSS, HIPAA, FERPA, A133 and GDPR.
John has several GIAC certifications (GSEC, GCIH, and GCWN) and has been active in the local information security community through groups like InfraGard and the Higher Education Security Council for EDUCAUSE. He holds BS and MA degrees from Xavier University and has served as an adjunct professor at Xavier and the University of Cincinnati.
Cybersecurity Guidance from the Top
Seems like nowadays, everybody’s got an opinion on how to protect your data and assets from threats like ransomware, supply chain attacks, and good old exploitation of vulnerable Internet-facing services.
That’s not really a bad thing, to be honest. At the heart of any responsible, mature security program is a set of fundamental principles—least privilege access, defense in depth, etc.—as well as basic practices like vulnerability management and security monitoring. The more voices we have urging organizations to adopt them, the better.
One significant voice in the last few months has been the White House. In May, we saw the President issue an executive order directing new security requirements for federal agencies as well as their suppliers. Key among these requirements:
Service providers will have to share information about threats they've observed and breaches they've experienced, and store logs and telemetry for use in breach investigations.
Suppliers of software to the federal government will have to adhere to new requirements around secure software development. They will need to use administratively separate build environments, audit trust relationships, and implement risk-based multifactor authentication (MFA). Additionally, they will need to document and minimize software dependencies in the build process, use encryption, and monitor the environment for threats.
Federal agencies themselves will have to migrate to a zero trust network architecture, roll out endpoint detection and response (EDR) tools, and implement MFA and stronger encryption on data at rest and in transit. Furthermore, they will have to adopt a new framework to share threat and incident information with each other.
The technologies listed here—MFA, EDR, and zero trust—are more than just fancy new industry buzzwords (although they sure are used that way). They represent some of the most effective modern security controls available. It’s encouraging to see the White House push their use.
The Biden administration has been vocal about the recent spate of high-profile ransomware attacks, too. In response, Anne Neuberger, Deputy National Security Advisor for Cyber and Emerging Technology, published a memo to business leaders—not just federal contractors, but any business operating a computer network—urging them to invest in some of these same technologies.
The guidance lays out a set of valuable practices that can help address ransomware as well as many other potential threats:
Implement MFA, to protect against stolen credentials.
Implement EDR, to identify suspicious activity in your environment and respond quickly.
Encrypt your data (note that while ransomware attackers also encrypt data, this control prevents them from publishing stolen data, an increasingly common tactic among these attackers).
Patch your operating systems and applications.
Back up your systems, test the backups, and use offline backups.
Run tabletop exercises to test your incident response plan.
Use a third-party penetration testing firm to determine if your defenses will withstand an actual attack.
Segment your networks to limit internal access to critical systems and data.
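To make one of these practices concrete: "test the backups" means actually restoring and verifying files, not just confirming that the backup job ran. The sketch below, with an assumed tar-archive backup format and directory layout, restores an archive to a scratch directory and compares checksums against the live copies:

```python
# Sketch of a backup restore-and-verify check: restore an archive
# to a scratch directory and confirm every file's checksum matches
# the live copy. The tar format and layout here are illustrative.
import hashlib
import tarfile
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_backup(source_dir: Path, archive: Path) -> bool:
    """Restore `archive` to a temp dir and compare each file's
    checksum against the original under `source_dir`."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive) as tar:
            tar.extractall(scratch)
        for original in source_dir.rglob("*"):
            if original.is_file():
                restored = Path(scratch) / original.relative_to(source_dir.parent)
                if not restored.is_file() or sha256_of(restored) != sha256_of(original):
                    return False
    return True
```

A check like this, run on a schedule, turns "we have backups" into "we have backups we know we can restore"—which is the difference that matters during a ransomware incident.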