Brandon's Blog

11/26/2007

Trust

Upon beginning a required online Ethics & Compliance course, I was asked to verify my identity. The first question that came to mind was: how do they know I’m not lying?

Probably a result of web application development, where failing to assume your user is Snidely Whiplash reincarnate would constitute a weak security architecture.

Web apps are a bit of a culture shock for any developer accustomed to the comforts of the standalone application.  The cornerstone of web security is the concept of “trust no user data,” meaning that anything sent from the user has no guarantee of being legitimate.
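To make that concrete, here is a minimal sketch in Python of what re-checking user data on the server looks like. The form field, the handler, and the list of allowed values are all invented for illustration; the point is simply that the submitted value is checked against what the server knows, not against what the page happened to offer.

    # Hypothetical server-side handler: form data arrives as a plain dict.
    # The server never assumes the submitted value was one of the options it
    # actually rendered -- it re-checks against its own whitelist.
    ALLOWED_CATEGORIES = {"news", "events", "downloads"}  # invented example data

    def save_category(form: dict) -> str:
        submitted = form.get("category", "")
        if submitted not in ALLOWED_CATEGORIES:
            # The page's dropdown only showed legal options, but a crafted
            # request can send anything, so it gets rejected here.
            raise ValueError("illegal category: %r" % submitted)
        return submitted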

This is hard to stomach for a standalone developer. A classic example from my current CMS project: I have a checkbox that, about half the time, needs to be both disabled (grayed out) and checked. Intuitively, a grayed-out option should be untouchable, but a web client (or a simulated web client) could just as well don a pegleg and eyepatch and uncheck the box just the same.
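Here is roughly how that plays out on the server, as a sketch with a hypothetical form handler and field name: when the server itself decided the box is locked, it ignores whatever value the request claims and re-applies its own decision.

    # Hypothetical handler for the locked-checkbox case.  'is_locked' is whatever
    # server-side rule decided to render the box disabled-and-checked in the
    # first place; it gets applied again here, regardless of the submitted data.
    def resolve_checkbox(form: dict, is_locked: bool) -> bool:
        if is_locked:
            # The page showed a grayed-out, checked box.  A pegleg-and-eyepatch
            # client can still POST it as unchecked, so the submitted value is
            # simply not consulted.
            return True
        # Otherwise the user genuinely had a choice; honor it.
        return form.get("subscribe") == "on"

The same idea applies to any field the page renders as read-only: the server treats the rendering as a courtesy to honest users, not as enforcement.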

One might argue that pulling this off takes enough knowledge that the pesky user has earned the privilege of unchecking the box. That line of thinking is called “security through obscurity,” and it’s a dumb idea that gets used fairly often.

Doing things the Right Way takes a lot more thinking than actual effort. You have to run scenarios like “what if they hand-type the link for a tab that is hidden from them?” Once you figure out the scenario, it’s a two-line “if” statement on the server to knock out the capability.
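Something like the following, assuming a hypothetical role-based rule for which tabs are hidden: the same check that removes the tab from the navigation runs again when the page itself is requested, so a hand-typed link gets the same answer as a missing link.

    # Hypothetical guard for the hand-typed-URL scenario.
    HIDDEN_TABS_BY_ROLE = {"viewer": {"admin", "billing"}}  # invented example data

    def can_view_tab(role: str, tab: str) -> bool:
        return tab not in HIDDEN_TABS_BY_ROLE.get(role, set())

    def show_tab(role: str, tab: str) -> str:
        if not can_view_tab(role, tab):      # the two-line "if" in question
            return "403: this tab is not available to you"
        return "rendering tab: " + tab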