Andrew Owen

Fostering security awareness

This article originally appeared on my personal dev blog: Byte High, No Limit.

Today's post is based on a presentation I gave at a security conference in the 2010s. It's a bit longer than I would normally share, but I think it's still relevant, possibly more so than when it was originally written.

Introduction

This article is aimed at a non-technical audience. The goal is to raise awareness of the threats that often get overlooked when hardening software and hardware. In practice, you can only ever mitigate security threats. For example, Brian Dye once told the Wall Street Journal that Symantec, then the biggest antivirus vendor, was getting out of the antivirus business because the software stopped only around 45 per cent of attacks. He said the money was no longer in "protect", but instead in "detect and respond".

Or consider the compromise of RSA's SecurID. The theory at the time was that a nation state was trying to get access to secrets at a military aerospace vendor, but was blocked by the vendor's use of SecurID. So instead the attackers sent targeted phishing emails to RSA employees, which enabled them to breach SecurID's security and get what they were really after. RSA took a lot of criticism for how it responded to the attack. As an aside, that's why it's important to train your staff to recognize phishing emails.

Say you don't have a disaster recovery plan. When the worst happens, even if you change all your passwords, you know you'll have to do it again once all the services you use have their new keys in place. And you'll still wonder whether anyone managed to leave snooping software on those services while the keys were compromised. The Heartbleed OpenSSL vulnerability is an example of the worst happening without a disaster recovery plan in place.

As IT professionals, when we talk about security we're mostly talking about confidentiality, integrity, and availability (CIA) of data. We don't want confidential data leaving the organization, so we enforce a trusted device policy to ensure all BYO devices have their data encrypted and can be remotely wiped. We block the use of file sharing applications like Dropbox that can lead to confidential data being stored in the public cloud. And we provide users with alternatives that keep the data within the corporate network, because users really like Dropbox.

We lock down all the USB ports, because corporate spies have started sending out free mice with hidden malware to employees (I'm not making this up). And we use access controls to ensure people only have access to the information they need to do their job. We look after data integrity by making regular backups, and we do periodic restores to make sure those backups are working. And we make sure the data is available by doing system maintenance while the west coast of America is asleep. OK, so outside of California your mileage may vary. So assuming you've done everything you should to secure your software and hardware, what have you missed? Well, I'll get to that later.

Case study: the payment card industry

I've been interested in security since the late 1980s when I got my copy of Hugo Cornwall's Hacker's Handbook, where I discovered the existence of the Internet, or ARPAnet as it was known back then. Prior to joining the security business, I worked for a retail software company, where I discovered all sorts of frightening things about how card payments are processed. For instance, did you know that when chip and PIN payment was originally introduced in the UK, there was no encryption between the mobile radio units and the base stations? Thankfully, that's now been resolved.

Or did you know that all the card payment transactions in high street stores used to be stored and sent unencrypted to the banks? Now the reason for this was that, as I'm sure you can imagine, there is a very large number of transactions throughout the day's trading. Traditionally, these were sent to the bank at the end of the day for overnight processing. You'll be glad to know that they were sent over a dedicated line rather than the public Internet.

But even so, they sat on the host system without any encryption. The reason was the overhead that encryption would have added: every transaction would have had to be individually encrypted and decrypted to work with the banks' batch processing systems, and that would have added just enough delay that eventually the system wouldn't be able to keep up with the volume. Payments would be going into the queue faster than they could be processed.
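
That queueing point is worth making concrete. Here's a minimal sketch, with invented numbers rather than real payment-system figures, showing how a small per-transaction overhead tips a batch queue from keeping up to falling ever further behind:

```python
# Illustrative only: made-up numbers, not real payment-system figures.
# Shows why a small per-transaction overhead can make a batch queue unstable:
# the backlog stays bounded while service is faster than arrivals, and grows
# without limit as soon as it is slower.

def backlog_after(hours, arrivals_per_sec, service_time_sec):
    """Transactions still waiting after `hours` of trading."""
    seconds = hours * 3600
    arrived = arrivals_per_sec * seconds
    processed = min(arrived, seconds / service_time_sec)
    return arrived - processed

ARRIVALS_PER_SEC = 200          # hypothetical transaction arrival rate

for label, service_time in [("plain", 0.004), ("with crypto overhead", 0.006)]:
    print(label, [round(backlog_after(h, ARRIVALS_PER_SEC, service_time))
                  for h in (1, 4, 8)])
# plain:                backlog stays at 0 (250/s capacity > 200/s arrivals)
# with crypto overhead: backlog grows by ~120,000 transactions per hour
```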

Now, you may have heard of PCI DSS (the Payment Card Industry Data Security Standard). Among other things, that standard says that organizations have to restrict who has access to the folder containing the card payment data. And so already we've gone beyond the software and hardware: we've got a security policy, the PCI DSS, and that policy is based at least in part on trust. If you want to read more about trust, I recommend Bruce Schneier's book Liars and Outliers.

What I want to get across here is that software and hardware are just part of the security solution. All retailers in the UK are supposed to be audited for compliance with PCI DSS. But according to Financial Fraud Action UK, card fraud losses in the UK for 2013 totaled £450.4 million. Now that sounds bad, but to put it another way, it's equal to 7.4 pence for every £100 spent. And the things we have to consider here are the risk and the cost of mitigating that risk.
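
As a quick sanity check (my arithmetic, not a figure from the report), those two numbers are consistent with total card spend of roughly £600 billion that year:

```python
# Back-of-the-envelope check of the fraud figures quoted above.
losses = 450.4e6            # £450.4 million in card fraud losses (2013)
rate = 7.4 / 100 / 100      # 7.4 pence lost per £100 spent = 0.074%
total_spend = losses / rate
print(f"Implied total card spend: £{total_spend / 1e9:.0f} billion")
# Implied total card spend: £609 billion
```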

The payment card industry wants to keep fraud down, but if putting in place a solution that eliminates fraud costs more than the cost of the fraud itself, then it will look for a cheaper solution. So actually, even before you secure the box, you really need a security policy. Because if there’s nothing of value in the box, then you don’t really need it to be that secure. But if what’s in the box is the most valuable thing you have, then you really need to be able to deal with a situation where all of your security measures failed.

The importance of a security policy

So, although that was a bit of a roundabout way to get to my point, what I'm advocating is that organizations need a security policy. And vendors of security solutions need to help their customers think about security in this way. So what makes a good security policy? Well, first of all, you need someone with responsibility for the policy: the chief security officer (CSO). And one of their most important responsibilities is to keep the policy under review, because the environment is changing all the time, and a static policy can't address that.

So how do you come up with a good security policy? Well, there are various things you need to take into account. But primarily it’s about working out the risk: How likely is it that someone will walk out of this facility with all this government data on a USB pen drive? And the cost: what will be the effect if this confidential information about everyone we’re spying on gets into the public domain?

So for each risk, you work out the associated cost, and then you come up with a solution proportionate to the risk. Let's go back to the early days of hacking. I'm not sure anyone ever calculated the risk of hackers going dumpster diving for telephone engineer manuals. But I'm reasonably confident that the cost of shredding all those manuals, set against the risk of someone typing the whole thing into a computer and uploading it to a bulletin board system, was fairly high. That was in the days before cheap scanners, good optical character recognition, and widespread access to the Internet, which is why everyone now securely disposes of confidential documents, don't they?
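
To make that risk-versus-cost weighing concrete, here's a minimal sketch of the comparison a security policy is making for each risk. The probabilities and costs are invented purely for illustration:

```python
# Illustrative only: the probabilities and costs are invented.
# A mitigation is worth adopting when it costs less than the expected
# loss it removes.

def expected_loss(probability, impact):
    return probability * impact

risks = [
    # (name, annual probability, impact if it happens, annual mitigation cost)
    ("confidential data walks out on a USB drive", 0.05, 2_000_000, 40_000),
    ("dumpster-dived manuals published online",    0.01,   100_000, 30_000),
]

for name, p, impact, mitigation in risks:
    loss = expected_loss(p, impact)
    verdict = "mitigate" if mitigation < loss else "accept the risk"
    print(f"{name}: expected loss £{loss:,.0f}, "
          f"mitigation £{mitigation:,.0f} -> {verdict}")
```

With these made-up numbers, the USB risk is worth mitigating and the manual-shredding isn't, which is exactly the calculation the phone companies were implicitly making back then. Change the numbers (cheap scanners, ubiquitous Internet) and the answer flips.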

Now, in the Snowden case there were a couple of things that surprised me. First, that the NSA wasn't using mandatory access control (MAC). Or in other words, they weren't using a trusted computing solution. They were using the same operating systems as the rest of us. I think that can partly be explained by the fact that it's expensive to get support for trusted operating systems, because almost no one besides governments uses them. And often the applications that governments want to run aren't available on those platforms, so the cost of using them may exceed their benefit in mitigating risk. But the other thing that surprised me was the practice of password sharing.

And that brings me to the main vulnerability you face if your hardware and software are secure: your users. Kevin Mitnick (I'm assuming you've heard of him; if not, look him up) asserts, and I don't disagree with him, that humans are the weakest link in security. In fact, I recommend his book The Art of Deception if you want to know exactly how predictable and easy to manipulate people are.

So let's look at the password sharing issue. If you put a big enough roadblock between your users and getting their work done, they will find a detour around it. Is it easier to tell someone your password than to jump through hoops to get that one file they need? Many companies have a policy that passwords must contain at least eight characters, with both upper and lower case letters, at least one number, and at least one special character. The password also can't be one of the previous three. So what do users do? They pick dictionary words with substitutions.
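
For illustration, here's roughly what that complexity policy looks like when enforced in code. This is a sketch of the rules described above, not any particular company's implementation, and it shows why the policy doesn't buy as much as it seems to:

```python
import re

def meets_policy(password: str, previous: list[str]) -> bool:
    """Check a password against the complexity rules described above."""
    return (
        len(password) >= 8
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^a-zA-Z0-9]", password) is not None
        and password not in previous[-3:]   # not one of the previous three
    )

# A predictable dictionary word with substitutions sails straight through:
print(meets_policy("P@ssw0rd", []))   # True
```

Every rule is satisfied, and yet the result is exactly the kind of password that appears near the top of every cracking wordlist.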

And then users have to change their password every six months, or quarterly if it's an administrative password. This leads to one of two things: they write the passwords down, or they repeatedly change their password until they cycle back to the original one. It's pretty easy to get a valid corporate username; they're in all of our email addresses. If you can actually get onto a corporate site and physically connect to the network, you can just keep trying to connect until you brute force the password.

So how do you get on site? Well, this touches on the other main vulnerability: physical security. Many companies use employee badges for building access, and various areas are restricted to specific groups of employees. They have a policy of not holding the door open for people employees don't recognize. Unfortunately, it is in most people's nature to be helpful. If I smile at someone as they go through a door, and I'm dressed appropriately, they're less likely to question whether they should have just let me follow them.

Mitnick’s book is full of these kinds of social engineering techniques. But actually the easiest way to get on site at some companies is to sign up for a training course. You might have read in the news earlier this year about the gang of crooks who stole £1.25 million by going into bank branches and attaching KVM (that’s keyboard/video/mouse) switches. Reports haven’t detailed how they got into the building, but it’s safe to assume it was low tech, and they didn’t break in.

So you need to educate staff about threats: phishing emails, social engineering, not picking up USB pen drives that you find lying around and connecting them to your corporate PC. I'm not going to cover BYOD in depth. That's "Bring Your Own Device", although some have called it "Bring Your Own Disaster" because of the additional risks and management headaches it entails. I will say that the mitigation is to require BYO devices to meet a minimum level of protection: a secure password, encrypted storage, and the ability to do a remote wipe. But basically, the message is that it's all very well having a security policy, but it isn't much use if your staff don't know about it.
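
As a sketch of what that minimum BYOD bar might look like expressed as a compliance check (the device attributes here are my own invention, not any particular MDM product's API):

```python
from dataclasses import dataclass

@dataclass
class Device:
    # Hypothetical attributes an MDM or compliance tool might report.
    has_passcode: bool
    storage_encrypted: bool
    remote_wipe_enabled: bool

def byod_compliant(device: Device) -> bool:
    """Minimum bar described above: passcode, encryption, remote wipe."""
    return (device.has_passcode
            and device.storage_encrypted
            and device.remote_wipe_enabled)

print(byod_compliant(Device(True, True, False)))   # False: no remote wipe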

Once you've got a policy in place, you need to stress test it. This is where the "red team" comes in. This can be an internal group, or an externally hired one, whose job is to attempt to penetrate your security, for instance by leaving USB pen drives lying around or sending test phishing emails. Penetration testing needs to be conducted regularly, at a frequency that depends on your risk and cost analysis, and the security policy should be updated based on the findings.

But let's come back to physical security. In the aftermath of Hurricane Sandy, it seems fairly obvious to state that if you're doing offsite backup to multiple data centers, at the very least you don't want them co-located in the same flood plain. Of course, since then everyone has looked at where their critical services are and ensured sufficient redundancy to deal with a major disaster. Haven't they?

Assuming you've got the location sorted out, and you're outside the 500-year flood plain, you're going to want to consider alternate power sources, given the increasing demands being placed on the power grid. And when you've got your failover power supply in place, it helps to test that it actually works. Your backups are only as good as your ability to recover from them, so it's important to perform regular restore tests to make sure that's the case. Physical access can be controlled by physical barriers, locks, and guards, and it can also be monitored by video cameras. Servers get hot, so you need to consider fire suppression systems, ideally ones that will leave the data in a recoverable state.
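
Restore testing is the piece most often skipped, so here's a minimal sketch of the idea. The paths are hypothetical and a real test would exercise your actual backup tooling, but the principle is the same: restore somewhere safe, then verify what came back:

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list[str]:
    """Return files whose restored copy is missing or differs from the source."""
    failures = []
    for original in source_dir.rglob("*"):
        if original.is_file():
            restored = restored_dir / original.relative_to(source_dir)
            if not restored.exists() or checksum(restored) != checksum(original):
                failures.append(str(original))
    return failures

# Hypothetical paths: restore last night's backup somewhere temporary,
# then compare it against the live data before you actually need it.
failures = verify_restore(Path("/srv/data"), Path("/tmp/restore-test"))
print("restore OK" if not failures else f"{len(failures)} files failed verification")
```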

Summary

I’ve barely scratched the surface, but hopefully I’ve given you some things to think about. So to sum up: you want a security policy that is under continual review and covers:

  • Human nature
  • Disaster recovery
  • Physical location
  • Penetration testing
  • Social engineering

And really the most important thing is to raise security awareness.
