
Don't Trust Users: Application Security Basics

DATE: May 11, 2021

When it comes to application security, it pays to be a cynic. When developing a tool, it is easy to focus on everything it is supposed to accomplish. Equally important is the converse: we need to think about the things it should not be able to do. An arbitrary user from the public internet should not be able to view our internal data. A standard user should not be able to modify administrator settings. A user should not be able to change the personal profile information of a different user. Each of these seems obvious, yet a multitude of non-obvious situations can lead to similar security vulnerabilities. We should regularly ask, “Should I trust the user with this?” Often, we will find that the answer is “No.”

For the purposes of this article, let us imagine we have a blog application. Authors are able to log in to see their existing articles, draft new ones, and submit them for approval to managers. Managers are able to see all of the articles in the system, as well as grant new authors access to the system. Visitors to the site can of course read any articles that have been published. Our aim will be to use this site to illustrate several vulnerability considerations.

First, we ought to be cautious about what information we share with a given user. For our hypothetical site, consider the login and account-recovery messages shown to authors and managers. The login form requests a username and a password. In some applications, after submitting this form, you may receive a message, “No account with this username was found.” Is this information we should have shared with the user? If this is a malicious individual seeking access to our system, we have helped them in their mission. They can try various usernames repeatedly. When the message changes to “the password was incorrect,” they know they have a valid username. They can now confidently move on to the task of cracking the password. You may notice that many sites these days display a simple “login failed” message regardless of which piece was incorrect. On our site, the names of authors will appear in the byline for the articles. Their usernames, however, should remain private. This goes doubly for the usernames of managers or administrators. (Bonus advice: do not have your administrator account be named ‘admin’!)
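A minimal sketch of a uniform login message in Python (the `USERS` store and helper names here are hypothetical, not part of any real framework):

```python
# Hypothetical in-memory user store; a real system would verify a
# salted password hash rather than compare plain strings.
USERS = {"msmith": "correct-horse-battery-staple"}

def verify_password(username, password):
    return USERS.get(username) == password

def login(username, password):
    if username not in USERS:
        # Do NOT say "no such user" -- that confirms which usernames exist.
        return "Login failed."
    if not verify_password(username, password):
        # Identical message either way, so an attacker learns nothing
        # about whether the username was valid.
        return "Login failed."
    return "Welcome!"
```

Because both failure paths return the same string, probing usernames through the login form tells the attacker nothing.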

Similar problems can arise with a forgotten-password process. One message may read, “this username was not found,” while a valid username instead returns, “we have sent a recovery link to your email address.” Again, we give a malicious individual an opportunity to find valid usernames. In this scenario, we may also see an email address exposed: “has been sent to yours@email.com”. Should we give that information out to someone just because they knew a username in the system? Now, they may be able to target that individual and gain access to their email account, which in turn will get them into our application. For the authors, this may not be a vulnerability, as their contact information may appear in their articles. For managers, we likely should not share such information.
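The same uniform-response idea applies to recovery. A sketch, again with hypothetical names, where the response neither confirms the username nor echoes the email address:

```python
# Hypothetical username-to-email store.
REGISTERED_EMAILS = {"asimov": "asimov@example.com"}

def send_recovery_link(email):
    pass  # stand-in for real email delivery

def request_password_reset(username):
    email = REGISTERED_EMAILS.get(username)
    if email is not None:
        send_recovery_link(email)  # only send when the account exists
    # Identical response whether or not the username was found,
    # and the email address is never echoed back to the requester.
    return "If that account exists, a recovery link has been sent."
```

The email is still sent when appropriate; only the on-screen message is made indistinguishable.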

This same idea applies to authenticated users. In our application, an author is able to modify any of the articles they have written. Now suppose they modify the URL of one of their articles and start trying other article IDs. If the two possible responses are “no article exists with this ID” and “you do not have access to this article”, they can identify a valid article and begin trying to access it in other ways. Note that, were our system to allow authors to see all articles by others but only modify their own, this would not be an issue. The questions we ask to guide our security therefore vary by system. Should outside individuals know the logins of our users? Should our users be able to see records tied to other users? If not, we should make sure never to reveal such information to them.
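A sketch of collapsing the two responses into one (toy data; in a system where article existence itself is secret, “does not exist” and “not yours” should be indistinguishable):

```python
# Toy article store mapping ID to owner.
ARTICLES = {7: {"author": "alice"}, 8: {"author": "bob"}}

def open_for_editing(article_id, current_user):
    article = ARTICLES.get(article_id)
    if article is None or article["author"] != current_user:
        # One indistinguishable answer for "does not exist" and
        # "exists but is not yours" -- no probing for valid IDs.
        return "Article not found."
    return f"Editing article {article_id}"
```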

These items all fall into a rather mild category of vulnerability: sharing information with users that may make a breach easier to achieve. Our distrust of users must also extend to the input they send to our application. Broadly speaking, we refer to this category as input validation. We need to verify that what we receive is the sort of thing we were expecting and check that nothing suspicious is included. Take, for instance, a newsletter sign-up form that includes a field to record a phone number (our marketing team appreciates the old ways). If we are dealing exclusively in the USA, we would expect this to be 10 digits. Should we allow someone to submit something longer than 10 characters? Should we accept an input that includes letters or other symbols? This example is not directly related to security, but it demonstrates the mindset behind a sound security posture: we must determine the allowable forms of data input and enforce rules so that only such forms are accepted.
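The phone-number rule above might be enforced like this (one reasonable validation policy among many; the accepted formatting characters are an assumption):

```python
import re

def valid_us_phone(raw):
    # Strip common US formatting (spaces, parentheses, dashes, dots),
    # then require exactly 10 digits -- nothing more, nothing less.
    digits = re.sub(r"[\s().-]", "", raw)
    return bool(re.fullmatch(r"\d{10}", digits))
```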

Injection attacks are a prime example of how this can result in real vulnerabilities. If we have a search function to find an article by its numeric ID, should the input allow spaces and letters? Should the form a manager uses to create a new author allow ampersands and semicolons in a name field? Injection attacks abuse exactly such inputs to access restricted information or even make changes to the system. The classic xkcd comic Exploits of a Mom makes a joke about such a SQL injection attack: a ‘name’ being added to a database is written in such a way that it executes a command in the system. We should not trust that a user's input is safe for us to process. Perhaps we would disallow characters that have no purpose in this particular field, such as semicolons and parentheses. We may also choose to parameterize the input, ensuring the whole input is processed as a string with no chance of it being treated as a command in the query.
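Parameterization is straightforward in practice. A sketch using Python's built-in `sqlite3` module (the schema here is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO articles VALUES (1, 'Hello World')")

def find_article(conn, user_supplied_id):
    # The ? placeholder sends the input as data, never as SQL, so a
    # payload like "1; DROP TABLE articles" cannot run as a command.
    cur = conn.execute("SELECT title FROM articles WHERE id = ?",
                       (user_supplied_id,))
    return cur.fetchone()
```

Contrast this with string concatenation (`"... WHERE id = " + user_supplied_id`), where the same payload would be executed as part of the query.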

Cross-site scripting, or XSS, is another problem we may face for similar reasons. Say we have a comment section attached to each article. A clever and malicious user writes a comment. Instead of normal text, they write HTML and JavaScript code. We save this comment into our database, where it is relatively harmless. However, the next time a user loads that particular article, the malicious comment is rendered on the page, the JavaScript is executed, and the reader becomes a victim of whatever the attacker was seeking to achieve. In the same way that we asked whether the input was safe for our database, we must ask whether it is trusted enough to show to users. Disallowing the symbols needed in HTML and JavaScript would be one possible solution, as would forcing the comment to be displayed as text rather than processed as HTML.
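Forcing the comment to display as text usually means escaping it at render time. A sketch using Python's standard `html.escape` (the surrounding `<p>` wrapper is illustrative):

```python
import html

def render_comment(comment):
    # Escape <, >, &, and quotes so the browser shows the comment
    # as literal text instead of interpreting it as markup.
    return "<p>" + html.escape(comment) + "</p>"
```

In real templating engines this escaping is typically on by default; the danger is in features that explicitly mark user input as "safe" HTML.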

Now that we share information only with those we are certain deserve it, and we verify that what they send us is trustworthy, we need to remain vigilant. Many of our security concerns have multiple steps, and we must be suspicious of each of them. Let us imagine once more the author who can only see their own articles. We decided they should have access only to the records belonging to them, so the page displays links only to their articles. We may naively think that we have prevented them from accessing any other records. Of course, as mentioned above, they may try to navigate to other articles without clicking a link by changing the URL to a different article ID. With a proper security posture, we will check whether the user is permitted to access this particular record, even if they normally should never make it to this function.
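One way to make that per-record check hard to forget is to centralize it so every handler that touches an article passes through it. A sketch using a hypothetical decorator (the names and store are invented for illustration):

```python
# Toy article store mapping ID to owner and content.
OWNED_ARTICLES = {1: {"owner": "alice", "body": "My draft"}}

def requires_owner(handler):
    # Wrap a handler so the ownership check runs on every request,
    # not only when the user arrives via a link we chose to render.
    def wrapped(article_id, current_user):
        article = OWNED_ARTICLES.get(article_id)
        if article is None or article["owner"] != current_user:
            return "Article not found."
        return handler(article, current_user)
    return wrapped

@requires_owner
def edit_page(article, current_user):
    return "Edit form for: " + article["body"]
```

Hiding the link in the UI is presentation; the wrapped check is the actual security boundary.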

The same concept applies to restricted functions based on permissions. Perhaps we rightly limited the Publish Article function to Managers only. If a normal author tries to go to the Publish Article page, they are directed elsewhere and warned of an error. We have successfully secured step one of the publish process. However, this has not protected the second step, where the Publish form is POSTed to the server. If that function does not check whether the current user is a Manager, then anyone able to construct an HTTP POST request by hand may be able to modify our records. This fault is known as ‘incomplete mediation’: we have not maintained integrity at each step of the process. We should also apply this idea to the input validation mentioned previously. Our comment form will not allow you to click the submit button if suspicious characters are found in the name field. However, if we do not also check for those characters in the POST handler, the same hand-crafted HTTP request can cause the problems described above.
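The fix is to repeat the check in the POST handler itself. A sketch (roles and the `published_ids` store are hypothetical):

```python
def publish_article(current_user_role, article_id, published_ids):
    # The POST handler repeats the role check; hiding the Publish
    # page from authors does not stop a hand-crafted HTTP request
    # that skips the form entirely.
    if current_user_role != "manager":
        return (403, "Forbidden")
    published_ids.add(article_id)
    return (200, "Published")
```

The same rule covers input validation: any check performed in the browser (disabled buttons, JavaScript validation) must be repeated server-side, because the client is entirely under the user's control.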

When developing applications, we must not trust our users. By that, we mean that we should assume malicious actors, and our design should minimize or eliminate any opportunities for them to achieve their aims. Our guiding principles are to share only information we are certain a user needs, to verify the validity and safety of anything a user sends to our application, and to maintain this level of scrutiny at each phase of a function or process. There is, of course, a world of depth in how we go about this. Ultimately, it all comes back to this basis: if we want to protect our users, we start by not trusting them.

About the author

Micah Snabl

Application Developer

(260) 224-7473, msnabl@purdue.edu
