Let’s face it – the nuts and bolts of security can be as dull as any other technical topic. But unlike some subjects, security is vitally important to your code, your company, and the well-being of the Internet. This is not an all-encompassing discussion of secure coding, but rather an overview to show you how to make your code more secure, highlighting the most important things to keep in mind while designing and coding applications. The following information applies to all applications, but is particularly important for web applications.

This topic is top of mind for many people. I was quoted in a recent article in MRC’s Cup of Joe, talking about login protection for an application: each user should have a role in the system that defines what they are allowed to do and, more importantly, what they are not allowed to do (see tip #4 below).

The thing about personal computers and devices is that, well, they’re personal. All the “nannyware” and firewalls and network domain restrictions IT puts into place are not the same as someone watching users 24×7. A person has the ability to try whatever they want to while running their device and (in the case of personally owned devices) can install whatever they want on them. They can even monitor their official device and use another, less controlled device to emulate it.

This leads to the first rule: Trust no one.

1. The Door is Not Always Well Guarded
A typical user application requires input, and this is typically gathered via input forms of some sort. However, just because the field label says “Name” does not mean a malicious user won’t try to put everything from soup to nuts – including bizarre character combinations – in that field.

This leads to the second rule: Filter All Input. If a field is meant to hold a name, it should not accept numbers, symbols, or most punctuation (single dashes and apostrophes being the common exceptions). If the field is meant to hold a phone number or a tax ID such as an SSN, it has no need for punctuation at all.

The best and most foolproof way to filter input is to use a white list. If you enumerate every possible valid input character, there’s no way a potential attacker can sneak in something they shouldn’t. A black list may seem easier to implement, but it is generally neither as comprehensive nor as foolproof. Unfiltered input is the raw material of injection attacks, currently the number one trouble category according to the OWASP organization. If you take nothing else away from this article, remember this one maxim: always filter input!
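To make the maxim concrete, here is a minimal white-list filter sketched in Python. The field rules and function name are illustrative assumptions, not part of any particular framework:

```python
import re

# White-list patterns: enumerate exactly what each field may contain.
# The specific rules here are illustrative; tighten them to your own data.
FIELD_PATTERNS = {
    "name": re.compile(r"[A-Za-z][A-Za-z '\-]{0,99}"),  # letters, spaces, apostrophes, dashes
    "phone": re.compile(r"\d{10}"),                      # digits only -- no punctuation
    "tax_id": re.compile(r"\d{9}"),                      # e.g. an SSN: digits only
}

def is_valid(field: str, value: str) -> bool:
    """Return True only if the whole value matches the field's white list."""
    pattern = FIELD_PATTERNS.get(field)
    return bool(pattern and pattern.fullmatch(value))

assert is_valid("name", "O'Brien-Smith")
assert not is_valid("name", "Robert'); DROP TABLE users;--")  # injection attempt rejected
assert not is_valid("phone", "555-123-4567")                  # punctuation rejected
```

Because the patterns enumerate the allowed characters, anything outside them is rejected automatically – there is no black list of “bad” characters to keep up to date.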

2. The New Window Was Not Locked
In general, every feature or input field added to an application increases its attack surface, and with it the risk that a vulnerability is being introduced. Using a shared module or library to perform common tasks such as input filtering helps, but every line of code added to an application still increases the statistical probability that a new vulnerability has been added as well. New functionality that serves a different class of user can introduce subtle risks around permissions and data exposure, and a search function where none is required can leak data unexpectedly. We developers tend to be an inventive bunch, and often fall into the “wouldn’t it be handy to add this capability…” trap. To reduce risk, minimize this surface area: only add fields and functions that are required, and again, filter all input.

3. When You Move In, Change the Locks
Default settings are the bane of security audits. In particular, settings that are appropriate and even necessary in a development or testing environment can be fatal in a production situation. In addition to making sure that the production environment is secure in the traditional sense, applications and infrastructure should be configured with the most secure defaults, and these should only be changed when necessary. Default accounts and passwords must be changed.
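As a sketch of what “most secure defaults” can look like in code, consider this hypothetical Python configuration loader; the APP_* variable names are invented for illustration:

```python
import os
import secrets

# Start from the most restrictive value; loosen only via explicit configuration.
DEBUG = os.environ.get("APP_DEBUG", "false").lower() == "true"  # off unless opted in

# An empty host list means "serve no one" rather than "serve everyone".
ALLOWED_HOSTS = [h for h in os.environ.get("APP_ALLOWED_HOSTS", "").split(",") if h]

# Refuse to start with a default credential rather than shipping one.
ADMIN_PASSWORD = os.environ.get("APP_ADMIN_PASSWORD")
if not ADMIN_PASSWORD:
    raise RuntimeError("APP_ADMIN_PASSWORD must be set; no default account is provided")

# If no secret is configured, generate a random one instead of using a known default.
SECRET_KEY = os.environ.get("APP_SECRET_KEY") or secrets.token_hex(32)
```

Note the direction of every default: debugging is off, the host list is empty, and there is no built-in admin password waiting to be forgotten.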

The classic example of too much access is installing a new operating system on a PC: the installer runs as admin or root by default. While a user with this level of access is obviously required for maintenance and updates, it should not be the account one uses day-to-day. If a piece of malware gets onto your system, it runs with the permissions of the current user – and it’s much better if that user does not have unfettered access to do anything on the computer.

4. Only Selected People Have Door Keys (the tip picked up in Cup of Joe)
If an application is protected by a login, each user should have a role in the system that defines what they are allowed to do and, more importantly, what they are not allowed to do. Each user should have only the privileges needed to perform their tasks – and no more. This principle is called least privilege, and embracing it is essential to keeping your application and infrastructure secure. At minimum, a normal role and a supervisor or administrator role should be defined and enforced, so that normal users cannot accidentally or intentionally view, change, or delete data they should not have access to.
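Here is a minimal sketch of least-privilege enforcement in Python. The roles, permission names, and decorator are hypothetical, not tied to any specific framework:

```python
from functools import wraps

class PermissionDenied(Exception):
    pass

# Each role enumerates exactly what it may do -- and nothing more.
ROLE_PERMISSIONS = {
    "user": {"view_own_record"},
    "admin": {"view_own_record", "view_any_record", "delete_record"},
}

def requires(permission):
    """Refuse to run the wrapped action unless the user's role grants it."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            allowed = ROLE_PERMISSIONS.get(user.get("role"), set())
            if permission not in allowed:
                raise PermissionDenied(f"{user.get('name')} may not {permission}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires("delete_record")
def delete_record(user, record_id):
    print(f"record {record_id} deleted by {user['name']}")

delete_record({"name": "alice", "role": "admin"}, 42)   # permitted
# delete_record({"name": "bob", "role": "user"}, 42)    # raises PermissionDenied
```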

This is especially critical if your application stores or accesses personally identifiable information (PII). Many health and banking industry regulations (rightly) require that this information be carefully protected. This issue points back to our first maxim: trust no one.

5. The Guard Stepped Away
Developers sometimes put too much trust in the infrastructure they’re working within. If a password or single sign-on system gates access to the system, we assume everything behind it is safe and secure. However, even the best defense mechanisms may fail or be bypassed. The guard at the door will have to step away from his post on occasion.

The more controls you put in place around personal and confidential information, the better the chances that an attacker will be thwarted or at least slowed down if one of your central defenses fails. This principle is known as defense in depth. Approaches to keep in mind include multiple means of authorization, auditing controls, and explicit authorization checks on every application page.
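To illustrate that last point, here is a hedged sketch in Python: even though the perimeter (password gate or SSO) is assumed to have done its job, the page handler re-checks everything itself. The session shape and handler are invented for this example:

```python
def handle_report_page(session: dict):
    # Layer 1: the perimeter should have authenticated the user already --
    # but do not assume it did. Check again.
    if not session.get("authenticated"):
        return 401, "login required"

    # Layer 2: an explicit authorization check on this specific page,
    # independent of whatever the perimeter enforced.
    if "view_reports" not in session.get("permissions", set()):
        return 403, "forbidden"

    # Layer 3: an audit record, so that even a bypass of the outer
    # layers leaves evidence behind.
    print(f"AUDIT: {session.get('user')} viewed the reports page")
    return 200, "report data"

print(handle_report_page({"authenticated": True,
                          "permissions": {"view_reports"},
                          "user": "alice"}))
```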

6. The Drawbridge Won’t Go Up
Everything made by man fails sooner or later. This truism is the reason for our next principle: fail securely. If the chain breaks and the drawbridge cannot be raised, the castle stands open. Design systems so that when they fail, they prevent access by default: if the authorization system fails, it should fail in a manner such that nobody has access, rather than the opposite. This makes perfect sense when you think about it, but we often inadvertently enable the opposite unless we code carefully.

Catch all exceptions, provide reasonable error messages to the user, and ensure all options default to the lowest level of privilege. Uncaught exceptions can also lead to information leakage: the classic example is the Java stack trace, which neatly lists many of the modules and frameworks being used by your application!
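Here is a minimal fail-closed sketch in Python; check_permission is a stand-in for whatever real authorization lookup your application performs:

```python
import logging

def check_permission(user, action):
    # Stand-in for a real authorization lookup that might fail
    # (database down, service timeout, corrupted data...).
    raise ConnectionError("authorization service unreachable")

def is_allowed(user, action):
    """Fail closed: any error in the authorization path means 'no'."""
    try:
        return check_permission(user, action)
    except Exception:
        # Log the details server-side; tell the user nothing revealing.
        logging.exception("authorization check failed; denying by default")
        return False

if not is_allowed("alice", "delete_record"):
    print("Sorry, that action is not available right now.")  # generic message, no stack trace
```

The user sees a generic message, the details go to the server log, and a failure in the authorization path denies access rather than granting it.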

7. The Enemy is Out There
An attacker will use her skill and imagination, trying fifteen different ways of mangling input or sidestepping authorization controls – and the next attacker will try fifteen more. We cannot anticipate every possible method of attack the outside world will come up with, so we must keep these basic security principles in mind when coding, testing, and configuring our applications. Our job as developers is to make the bad guys’ job as difficult as possible – in addition to all the other requirements we have in hand when starting a project or updating an existing application. A security defect is a bug as surely as any functional defect, and it needs to be avoided as much as possible and eliminated whenever discovered.