Finally. Following on from Part 1 and Part 2, this is the end of my updated thoughts on an old Server Fault post, closing with some ideas on reducing risk in the future.
Reducing the risk in the future.
The first thing you need to understand is that security is a process you have to apply throughout the entire life-cycle of designing, deploying and maintaining an Internet-facing system, not something you can slap over your code afterwards like a few coats of cheap paint. To be properly secure, a service and an application need to be designed from the start with security as one of the major goals of the project. I realise that’s boring and you’ve heard it all before, and that I “just don’t realise the pressure, man” of getting your beta web2.0 (beta) service into beta status on the web, but the fact is that this keeps getting repeated because it was true the first time it was said and it hasn’t yet become a lie.
You can’t eliminate risk, and you shouldn’t even try. What you should do, however, is understand which security risks matter to you, and how to manage and reduce both the impact of each risk and the probability that it will occur.
Risk management for those who have never heard of risk before. I think this latter part of the original post reads as rather naive now… and yet people still seem to struggle to break out of the traps that this advice is designed to prevent.
What steps can you take to reduce the probability of an attack being successful?
- Was the flaw that allowed people to break into your site a known bug in vendor code, for which a patch was available? If so, do you need to re-think your approach to how you patch applications on your Internet-facing servers?
- Was the flaw that allowed people to break into your site an unknown bug in vendor code, for which a patch was not available? I most certainly do not advocate changing suppliers whenever something like this bites you because they all have their problems and you’ll run out of platforms in a year at the most if you take this approach. However, if a system constantly lets you down then you should either migrate to something more robust or at the very least, re-architect your system so that vulnerable components stay wrapped up in cotton wool and as far away as possible from hostile eyes.
- Was the flaw a bug in code developed by you (or a contractor working for you)? If so, do you need to re-think your approach to how you approve code for deployment to your live site? Could the bug have been caught with an improved test system, or with changes to your coding standards? (For example, while technology is not a panacea, you can reduce the probability of a successful SQL injection attack by using well-documented coding techniques.)
- Was the flaw due to a problem with how the server or application software was deployed? If so, are you using automated procedures to build and deploy servers where possible? These are a great help in maintaining a consistent “baseline” state on all your servers, minimising the amount of custom work that has to be done on each one and hence hopefully minimising the opportunity for a mistake to be made. The same goes for code deployment – if deploying the latest version of your web app requires something “special” to be done, try hard to automate it and ensure it is always done in a consistent manner.
- Could the intrusion have been caught earlier with better monitoring of your systems? Of course, 24-hour monitoring or an “on call” system for your staff might not be cost effective, but there are companies out there who can monitor your web facing services for you and alert you in the event of a problem. You might decide you can’t afford this or don’t need it and that’s just fine… just take it into consideration.
- Use tools such as Tripwire and Nessus where appropriate – but don’t just use them blindly because I said so. Take the time to learn how to use a few good security tools that are appropriate to your environment, keep them updated and use them on a regular basis.
- Consider hiring security experts to ‘audit’ your website security on a regular basis. Again, you might decide you can’t afford this or don’t need it and that’s just fine… just take it into consideration.
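On the SQL injection point above: the well-documented coding technique is parameterised queries, where user input travels as data rather than being concatenated into the SQL string. A minimal sketch using Python's built-in `sqlite3` module (the table and values are purely illustrative):

```python
import sqlite3

# A throwaway in-memory database standing in for the app's real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# Attacker-controlled input that would rewrite a naively concatenated query.
user_input = "alice' OR '1'='1"

# Unsafe pattern (never do this):
# query = "SELECT email FROM users WHERE name = '" + user_input + "'"

# Safe pattern: the ? placeholder keeps the input as a literal value,
# so the injection attempt is just an odd username that matches nothing.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # []
```

The same placeholder idea exists in every mainstream database driver (`%s` for psycopg2, `?` for JDBC, and so on); the point is that the query text and the values stay separate all the way to the database.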
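To make the Tripwire suggestion concrete, the core idea is just a file-integrity baseline: record a cryptographic hash of each file you care about, then periodically re-hash and flag anything that changed. A toy sketch of that mechanism (a real deployment would store the baseline off-host so an intruder can't rewrite it):

```python
import hashlib
import os
import tempfile

def baseline(paths):
    """Record a SHA-256 hash per file -- a minimal Tripwire-style baseline."""
    return {p: hashlib.sha256(open(p, "rb").read()).hexdigest() for p in paths}

def changed(paths, base):
    """Return the files whose current hash no longer matches the baseline."""
    return [p for p in paths
            if hashlib.sha256(open(p, "rb").read()).hexdigest() != base.get(p)]

# Demonstration: a temporary file stands in for a monitored config file.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"PermitRootLogin no\n")
tmp.close()

base = baseline([tmp.name])
print(changed([tmp.name], base))   # [] -- nothing modified yet

with open(tmp.name, "ab") as f:    # simulate tampering
    f.write(b"PermitRootLogin yes\n")

flagged = changed([tmp.name], base)
print(flagged)                     # the tampered file is flagged
os.unlink(tmp.name)
```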
Again, I don’t have much to add to this, except to say that it hasn’t aged that well. In the era of ‘Move fast and break things’ the idea that we have time to do most of this seems ludicrous. Of course we’re going to have a smooth deployment route for software from dev to test to prod (of course, everyone has a test and dev environment, but not everyone has a separate production environment). Of course we’re going to re-architect all our functions at the drop of a Hacker News post about a new framework… but we don’t have time to be formal about it.
Again, we’re back to easy deployment of both systems and code, making both platforms disposable and easy to replace. This isn’t the holy grail of never being a victim of intrusions either, but it can help to mitigate some of the risks if it’s done well and you are certain your developers and the dev/test environments are secure.
What steps can you take to reduce the consequences of a successful attack?
If you decide that the “risk” of the lower floor of your home flooding is high, but not high enough to warrant moving, you should at least move the irreplaceable family heirlooms upstairs. Right?
- Can you reduce the number of services directly exposed to the Internet? Can you maintain some kind of gap between your internal services and your Internet-facing services? This ensures that even if your external systems are compromised, the chances of an attacker using them as a springboard against your internal systems are limited.
- Are you storing information you don’t need to store? Are you storing such information “online” when it could be archived somewhere else? There are two points here: the obvious one is that people cannot steal information you don’t have, and the second is that the less you store, the less you need to maintain and code for, so there are fewer chances for bugs to slip into your code or systems design.
- Are you using “least access” principles for your web app? If users only need to read from a database, then make sure the account the web app uses to do so only has read access; don’t allow it write access, and certainly not system-level access.
- If you’re not very experienced at something and it is not central to your business, consider outsourcing it. In other words, if you run a small website talking about writing desktop application code and decide to start selling small desktop applications from the site, then consider “outsourcing” your credit card order system to someone like PayPal.
- If at all possible, make practising recovery from compromised systems part of your Disaster Recovery plan. This is arguably just another “disaster scenario” that you could encounter, simply one with its own set of problems and issues that are distinct from the usual ‘server room caught fire/was invaded by giant server eating furbies’ kind of thing.
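The “least access” bullet above can be demonstrated without a full database server. SQLite lets you open a connection in read-only mode, which behaves like a database account that has been granted SELECT but nothing else; a write attempt through that handle simply fails. A small sketch (file names are illustrative):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "app.db")

# Set up the database with a normal, writable connection (the "admin" role).
rw = sqlite3.connect(db)
rw.execute("CREATE TABLE articles (title TEXT)")
rw.execute("INSERT INTO articles VALUES ('hello')")
rw.commit()
rw.close()

# The web app gets a read-only handle -- the SQLite analogue of granting
# an account SELECT but not INSERT/UPDATE/DELETE.
ro = sqlite3.connect(f"file:{db}?mode=ro", uri=True)
titles = ro.execute("SELECT title FROM articles").fetchall()

try:
    ro.execute("DELETE FROM articles")   # an attacker-style write attempt
    write_blocked = False
except sqlite3.OperationalError:
    write_blocked = True

print(titles, write_blocked)
```

With a real database server the same effect comes from `GRANT SELECT` on a dedicated application account; either way, a compromised web app can read what it was always allowed to read, but can’t vandalise or delete the data.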
Good business principles that I hope that most people who have a formal web presence are following for their Internet-facing systems. I still get lots of requests for help from people working in smaller businesses where this just isn’t done… until it’s too late.
I think the main shift in managing intrusions has been that data is now the most important thing. Remember your legal obligations to the people whose data you hold, and remember to work on your deployments.
If you are scared to rebuild any system or service in your business then this is a critical weakness, both in terms of intrusion and in terms of risk to the business if it fails through “natural” causes.
Outsource where applicable. Microsoft Azure, Google Cloud and Amazon AWS are highly likely to be better than you at standing up virtual platforms for public web servers (don’t take this personally, they’re better than me too). Microsoft and Google are highly likely to be better than you at managing email services (again, don’t take it personally, me too).
Each time you outsource intelligently you’re buying into improved security and freeing up your time from mundane tasks to think about how to manage and protect the things that really matter: your business, its customers, and all your data.