F5 303 Study Guide – Part 1

Objective 1

In recent years, we've seen a lot of attacks on web applications, compromising a lot of data including PII, PHI, and username/password combinations, which then become feeds for phishing and other attacks. Check out the article I put together on what a WAF is and why you should have one for some specifics, but I think today the question for anyone with a web presence is not whether we will be attacked, but when have we been attacked, and were the attackers successful?

It used to be that web attacks were about hacking into content, maybe gathering some basic data, or *gasp* defacing a website with some kewl l33t hax0r speak! Nowadays, that is pretty much gone. Sophisticated attacks at the application layer, hidden by DDoS floods, with some threats and blackmail via email thrown in for good measure, are the new normal.

Having said that, what are the attacks that I, as a security engineer deploying advanced security, need to focus on? The OWASP Top 10 is the starting point, but you should also watch resources such as the CERT mailing list and any OS- and application-specific lists for emerging threats that haven't made it into the Top 10 yet.

Here is the 2013 OWASP Top Ten, which, as of 2015, is what you should study for the F5 303 ASM exam (memorize this list, and *understand* each item):

https://www.owasp.org/index.php/Top_10_2013-Top_10

This is a great high-level list; if you want to go deeper into any of these, let me know in the comments and I can do a post on specific attacks.

The F5 ASM WAF can mitigate these within its policies, and doing so is one of the main functions within your application delivery network. Many of the Top 10 revolve around validating the input a web application accepts. Using a negative security model, you can utilize signatures that look for specific strings, such as JavaScript functions that should not appear in your parameters, and stop them before the application server ever sees the data. Signatures can also stop other attacks, such as known exploits against misconfigured and outdated server software, again blocking the attack before the server ever sees it.
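To make the negative security model concrete, here is a minimal sketch in Python of what signature matching against request parameters looks like conceptually. The patterns and function names are my own invention for illustration; ASM's real signature engine is far more sophisticated.

```python
import re

# Hypothetical signature set: strings that should never appear in parameters.
SIGNATURES = [
    re.compile(r"<script\b", re.IGNORECASE),             # inline script injection
    re.compile(r"\bunion\s+select\b", re.IGNORECASE),    # classic SQL injection
    re.compile(r"\bdocument\.cookie\b", re.IGNORECASE),  # cookie-theft attempt
]

def violates_signatures(params: dict) -> list:
    """Return the names of parameters whose values match a known-bad pattern."""
    hits = []
    for name, value in params.items():
        if any(sig.search(value) for sig in SIGNATURES):
            hits.append(name)
    return hits

# A request like this would be dropped before the app server ever sees it:
bad = {"comment": "<script>document.cookie</script>", "user": "alice"}
print(violates_signatures(bad))  # ['comment']
```

The key point is that the check happens in front of the server: a matching request never reaches the application at all.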

Inherent to the operation of the ASM, a session is created for each user. This blocks plenty of attacks that try to replay a legitimate user, or use their URL or session ID to log an attacker in as that user. These session cookies also work in tandem with features such as tokens that prevent a logged-in user from clicking a link on a phishing site and unknowingly performing a function on the legitimate site, as in a CSRF attack.

The ASM can scan not only request data but response data as well. In doing so, it can scrub potential PII/PHI/PCI data such as Social Security numbers and credit card numbers. Even if an attacker has figured out a way to make the application display sensitive data, the ASM can still prevent that data from being sent out into the hands of the bad guys.
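Conceptually, response scrubbing is pattern matching on the outbound body. Here is a deliberately naive Python sketch (real scrubbers, ASM's included, also do things like Luhn-checking card numbers to cut false positives):

```python
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
# Naive 16-digit card matcher for illustration only.
CARD = re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b")

def scrub_response(body: str) -> str:
    """Mask SSN- and card-shaped strings before the response leaves the WAF."""
    body = SSN.sub("***-**-****", body)
    body = CARD.sub("****-****-****-****", body)
    return body

leaked = "Customer 123-45-6789 paid with 4111 1111 1111 1111."
print(scrub_response(leaked))
# Customer ***-**-**** paid with ****-****-****-****.
```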

An extremely useful feature of the ASM is integration with many dynamic application security testing (DAST) suites. A DAST suite probes your website for potential vulnerabilities, for instance by checking whether a form input is validated when submitted. If it finds that you can send data that potentially opens up a security hole in your website, the DAST tool notes this in a report so you can fix it. The ASM can take many of these reports and, either automatically or manually, import the results and create a signature set to enable in your policy, protecting you against those specific attacks. I look at this as the 80/20 rule of protecting your website (and it might be more like 95/5): once you block 80% of the attacks, you can really hone in on those last few, knowing you're already doing a pretty good job of securing your site. This type of protection is sometimes described as virtual patching, since the vulnerability is virtually patched within the ASM until you can update your application servers to do the same.
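The essence of the DAST-import workflow can be sketched as turning scanner findings into blocking rules. The finding format and field names below are invented; real DAST exports and ASM's import formats differ by vendor.

```python
# Hypothetical DAST findings, as a scanner might export them.
findings = [
    {"url": "/search", "parameter": "q",    "issue": "reflected XSS"},
    {"url": "/login",  "parameter": "user", "issue": "SQL injection"},
]

def build_virtual_patches(findings: list) -> dict:
    """Turn each finding into a 'watch this parameter on this URL' rule,
    mimicking the spirit of importing a DAST report into the policy."""
    patches = {}
    for f in findings:
        patches.setdefault(f["url"], set()).add(f["parameter"])
    return patches

def should_block(url: str, param: str, patches: dict) -> bool:
    """Enforce the virtual patch until the application itself is fixed."""
    return param in patches.get(url, set())

patches = build_virtual_patches(findings)
print(should_block("/search", "q", patches))     # True  - virtually patched
print(should_block("/search", "page", patches))  # False - unaffected parameter
```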

When building a policy, there's a fine line between how granular you can get and how secure you can be. One could argue that the best policy is the positive security model, that is, only allowing the specific set of requests that are known to be good. At a high level, you would allow every URI, every HTTP header, every cookie, every parameter name, and the corresponding values… You can imagine this could get a bit cumbersome, especially if you have 500 or 5,000 products on your site with 20 different options, and the prices of those products changing every day!

Lucky for us, we normally don't need to say that the shirt should only be plaid, gingham, or windowpane; instead, we can say "only allow A–Z and 0–9." A username might be allowed single quotes, as in O'Hare, but probably not !'s or #'s, while a password might. We can specify all of these things for each object, or perhaps create wildcard entries with a standard setting and then drill down to the outliers by adding them as explicit exceptions to the policy.
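The wildcard-plus-overrides idea can be sketched in a few lines. The parameter names and character classes here are just the examples from the text, not a recommended policy:

```python
import re

# The wildcard entity: a standard setting applied to any parameter
# that doesn't have its own explicit entry.
DEFAULT_ALLOWED = re.compile(r"^[A-Za-z0-9]*$")

# Explicit per-parameter outliers that override the wildcard.
OVERRIDES = {
    "username": re.compile(r"^[A-Za-z0-9' ]*$"),   # O'Hare is fine; ! and # are not
    "password": re.compile(r"^[A-Za-z0-9'!#]*$"),  # passwords get more leeway
}

def parameter_ok(name: str, value: str) -> bool:
    """Positive security model: the value must match its allow-list pattern."""
    pattern = OVERRIDES.get(name, DEFAULT_ALLOWED)
    return bool(pattern.fullmatch(value))

print(parameter_ok("username", "O'Hare"))  # True
print(parameter_ok("username", "O#Hare"))  # False
print(parameter_ok("color", "plaid"))      # True - falls back to the wildcard
```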

Working with the developers, and having them understand this from the beginning, is a great boon to your security posture as a whole. One of the best ways to deploy a security policy is to build what you and your developers think is the proper policy, then run their regression tests through a development or QA device. While doing this, you can turn on the automatic policy builder, or just manual traffic learning (the Learn checkbox), and see what shows up in your policy. You might find that usernames can contain all sorts of characters you never thought of, such as umlauts and cedillas, which you'll then be able to easily add to your policy.

Working with the developers, you can work through this exercise each time they deploy new code or add new features to make sure you don’t miss any changes they’ve made.

Oftentimes, you might think that this is overkill for a given application. Let's say you've deployed a standard application written by any number of major vendors, or perhaps it's mostly static content that you want to protect against misconfiguration, without the customizations of a one-off homegrown environment built in-house. Using the rapid deployment policy, you get a semi-custom policy from day one that performs checks such as HTTP and cookie compliance and valid request and response methods, and you then fine-tune it as you see traffic coming through, using the manual traffic learning feature. By default, your policy will be in transparent mode, which means log messages are written but no traffic is actually impacted, similar to an IDS as opposed to an IPS.
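The transparent-versus-blocking distinction boils down to this tiny sketch (mode names and the function are invented for illustration; ASM's actual enforcement settings are per-violation and far richer):

```python
def enforce(request: str, violations: list, mode: str = "transparent") -> str:
    """In transparent mode, violations are only logged (IDS-style);
    in blocking mode, the offending request is rejected (IPS-style)."""
    if violations:
        print(f"[policy] {request}: violations={violations}")
        if mode == "blocking":
            return "REJECTED"
    return "FORWARDED"

# Same violation, different outcome depending on the mode:
print(enforce("POST /app", ["illegal method"], mode="transparent"))  # FORWARDED
print(enforce("POST /app", ["illegal method"], mode="blocking"))     # REJECTED
print(enforce("GET /app", [], mode="blocking"))                      # FORWARDED
```

Running in transparent mode first lets you review what *would* have been blocked before flipping the switch to blocking.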
