Software needs to do more than just work; it needs to work well under load, with changing requirements, and with sub-optimal users. There are entire sub-fields of software engineering that aren't directly about coding - quality assurance, performance engineering, business analysis, etc. I'm particularly interested in two of these computer science sideshows - security and usability. As a recent user experience intern, and a former security consultant, I've had the chance to look at how these two software factors get applied in the real world, and I've found several common threads.
1. Resistant to “peanut butter”
In 1993, two University of Colorado professors wrote "Task-Centered User Interface Design", a book discussing the need to consider usability throughout the development process. They advised against building the software first and, "in the final stages of development, spread[ing] the interface over it like peanut butter." Keeping users in mind early and often is an oft-repeated principle of usable software design.
In software security, the same theory holds - consider Microsoft, DHS, and OWASP, all of which advocate building security into the software development process.
Addressing security and usability throughout the development process makes sense. They are both inherent properties of software, not just coding techniques, and their issues can trace all the way back to requirements. Unfortunately, for several reasons, including those outlined below, people still treat security and usability as afterthoughts.
2. Hard to quantify
Security and usability defects, unlike performance and functional defects, can be difficult to measure. The most obvious explanation is that they aren't easy to test directly - penetration tests and usability tests are small-scale, contrived exercises compared to the armies of attackers and users who will eventually attack and use the software. Another cause is that judging the severity of usability and security issues is subjective. Usability metrics like GOMS and security metrics like CVSS aren't widely used and don't seem to be very practical. I think one intractable piece of quantifying these practices is the heavy dependence on context: most usability and security assessments rely on understanding people as well as technology. People are hard to predict, and their opinions are subjective, which makes definitive statements about their behavior hard to come by.
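To show what one of these metrics looks like in practice, here's a minimal keystroke-level GOMS sketch in Python. The operator timings are the classic published averages; the login task and its operator sequence are invented for illustration.

```python
# Minimal keystroke-level GOMS sketch: estimate expert task time by summing
# standard operator times (seconds). The values are the classic
# Card/Moran/Newell averages; the login task below is invented.
OPERATOR_TIMES = {
    "K": 0.28,  # press a key (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # move hands between mouse and keyboard
    "M": 1.35,  # mentally prepare for the next step
}

def klm_estimate(sequence: str) -> float:
    """Sum operator times for a sequence like 'MPKKKK'."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# Log in: think, click the username field, type 8 characters, click the
# password field, type 10 characters, click submit (with hand homing).
login_task = "MPH" + "K" * 8 + "HPH" + "K" * 10 + "HP"
print(f"Estimated expert login time: {klm_estimate(login_task):.1f}s")
```

Even this toy version takes a couple dozen operators to model a trivial login flow, which hints at why KLM-style analysis of a full interface rarely survives contact with a real project schedule.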
3. Goldilocks dilemma, AKA not too much, not too little
One of the things I learned early on in security is that there is no such thing as 100% security - and even if there were, it wouldn't be desirable. There are diminishing returns, and security involves tradeoffs, usually at least of time and money. Unless you have an infinite budget and a mandate to build the most secure system in the world (e.g. the military), you have to settle for some amount of risk. At Foundstone we used to call this "secure enough to do business". Usability follows the same principle - you don't want to under- or over-do it. For example, IDEO had a television showcase where three highly talented cross-functional teams designed a shopping cart. It seems unlikely that most shopping cart buyers would see a return on investment from that kind of design process. 100% usability seems absurd by definition - there will always be compromises in user interface design. I'm not going to say people should only make software usable enough to do business, but the value of anything beyond that may not be obvious.
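To make the diminishing returns concrete, here's a back-of-the-envelope sketch using the textbook annualized loss expectancy (ALE) formula; all of the dollar figures are invented.

```python
# Back-of-the-envelope "secure enough to do business" arithmetic using the
# textbook annualized loss expectancy model. All figures are invented.
def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """ALE = expected cost of one incident * expected incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

ale_before = ale(50_000, 0.5)  # $25,000/year of expected loss
ale_after = ale(50_000, 0.1)   # expected loss after adding a control
control_cost = 30_000          # annual cost of the control

savings = ale_before - ale_after  # $20,000/year of risk removed
print(f"Spend ${control_cost:,}/yr to remove ${savings:,.0f}/yr of risk")
# A control that costs more than the risk it removes is past the point of
# diminishing returns - beyond the "secure enough" threshold.
```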
4. Tradeoffs
A classic tradeoff is security versus usability. If you make systems harder for attackers to exploit, you usually make them harder for legitimate users to use. Strict password policies, short session expirations, and CAPTCHAs are all examples of security measures that impact usability. But security can trade off against other qualities too, like performance, privacy, or even functionality.
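As a toy illustration (this is a hypothetical policy, not a recommendation), each rule in the password checker below raises the bar for attackers while adding one more hoop for legitimate users:

```python
import re

# Hypothetical "strict" password policy: every rule below buys some
# resistance to guessing at a direct cost in user effort.
RULES = [
    (r".{12,}",       "at least 12 characters"),
    (r"[A-Z]",        "an uppercase letter"),
    (r"[a-z]",        "a lowercase letter"),
    (r"[0-9]",        "a digit"),
    (r"[^A-Za-z0-9]", "a symbol"),
]

def failed_rules(candidate: str) -> list[str]:
    """Return human-readable descriptions of the rules the password fails."""
    return [msg for pattern, msg in RULES if not re.search(pattern, candidate)]

print(failed_rules("hunter2"))
# ['at least 12 characters', 'an uppercase letter', 'a symbol']
# Each failure is friction: users respond with sticky notes, password reuse,
# or support calls - the usability cost of the security control.
```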
Similarly, usable software may need to sacrifice features that confuse users. It may limit users' choices around privacy and security, and it could even impact performance if certain tweaks and configurations are hidden from users.
As mentioned earlier, both security and usability cost money and time. That's why cost-efficient techniques like discount usability engineering and threat modeling are so valuable - exhaustive reviews that address every corner of a system are simply out of reach for most organizations.
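For a sense of how lightweight such a technique can be, here's a sketch of a discount threat model as a small STRIDE table; the asset and findings are invented:

```python
# A discount threat-model sketch: walk the six STRIDE categories against one
# asset and record only what is worth acting on. Entries are invented.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation", "Information disclosure",
    "Denial of service", "Elevation of privilege",
]

findings = {
    "login endpoint": {
        "Spoofing": "credential stuffing -> add rate limiting",
        "Information disclosure": "verbose errors reveal valid usernames",
    },
}

for asset, threats in findings.items():
    for category in STRIDE:
        note = threats.get(category, "accepted risk / not applicable")
        print(f"{asset} | {category:<22} | {note}")
```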
5. No checklist
Because software security and usability are subjective, contextual, and dependent on human factors, there is no comprehensive one-size-fits-all checklist for security or usability reviews. The appeal of a silver bullet is clear, and while heuristic guidelines in security and in usability are popular and useful, each software system is different enough to require more than rote checklist-level analysis.
6. Who’s in charge?
Another challenge to building secure and usable software is determining responsibility, especially in large teams. Until about a decade ago, IT security teams focused on securing networks and third-party software packages against well-known attacks rather than on building secure software. Even now, dedicated software security personnel are fairly rare in organizations, and where they do exist, development deadlines can take precedence over security concerns.
Usability departments face a similar challenge - they have a large amount of work, limited resources, and political battles to fight whenever engineers are up against deadlines. How these groups tie in to a large software development or IT department, and how much power they hold, seem to vary from company to company. The difficulty of getting corporate buy-in may stem from multiple causes, including the challenge of proving cost-effectiveness and these groups' distance from the revenue stream.
7. Everyone’s got an opinion
You rarely hear about people getting into debates about performance or quality issues. The subjectivity of security and usability means that two rational people can disagree about a fundamental issue. Some security people are incredibly paranoid, while others may not worry enough. Usability professionals may prefer simple interfaces, or believe simplicity is overrated. There are shades of wrong and right, but precious little that is indisputable (except for anything from Steve or Bruce, obviously).
8. Hard to see value
The results of security engineering often go unnoticed. Good security controls, like good interfaces, are effectively invisible. Unfortunately, bad security and usability may go unnoticed as well. A site that spews out spam or gets targeted by phishers may never realize it has a problem. Cleartext passwords or credit card numbers may be compromised without anyone raising an eyebrow. Similarly, software with serious usability problems may slowly lose users to the competition without its creators ever realizing why.
This lack of visibility has led some to assume that security and usability don't matter. It's clear when a customer doesn't buy software because it lacks a feature, but the risks and rewards of insecure or unusable software can be much harder to see. When push comes to shove, security and usability can end up on the losing side.
9. Need {social, operational, financial} context
As I mentioned earlier, security and usability depend on context. As we used to say in our security courses, the security requirements of your kid sister's diary are different from those of a nuclear missile silo. Similarly, the usability required of a shell script is different from that of a search engine. There are no hard and fast rules, only principles, because each system operates in such a different context.
10. Require “expert” evaluation
Finally, usability and security both require expertise to get right. While everyone has some gut feeling about these subjects, the best security and design minds have a huge amount of experience behind their insight. I think this is why there are such strong design and security communities - people learn from each other, then carry that knowledge to outsiders as "specialists".

As with any top X list, this is just one of many perspectives, and there are undoubtedly counter-examples. I've also thought of some things that are very different between usability and security. One gives joy, the other confidence. The risks associated with bad usability are usually long term; security risks can be short term. Certainly, the two communities have different cultures and values. But there are also undoubtedly numerous techniques that can be shared between the two fields, and they deserve exploration, since security and usability are both so critical to modern software.