Prepopulate APM Username From O365 Login

F5’s Access Policy Manager (APM) offers many compelling security and optimization features for Microsoft’s Active Directory Federation Services (ADFS) – more info in the APM/ADFS Deployment Guide.  Odds are if you’re deploying ADFS you’re also using or looking at Office 365.  Normally, when you log in to O365 you will be redirected to a Home Realm Discovery page.  This page does 2 things:

  1. determines if you’re a federated user (i.e. you use another authentication source like F5, MS ADFS, Okta, … <insert long list of IDMs on the market today>).
  2. if you’re not a federated user, authenticate you against Azure AD.
Office 365 Empty Login Page

So when you type in your email address Microsoft parses the domain name and automatically redirects you to your local APM/ADFS environment for authentication – pretty nice! However, when you land at the APM login page you notice the username field is empty and you have to enter your username or email address (depending on how APM is configured for authentication) again.

You can fix this by customizing the APM Visual Policy Editor to look for O365 to ADFS logins and prepopulate the username.

Username Prepopulated from O365

So what does the APM configuration look like?

Final VPE

Step 1: Create an object to detect O365 to ADFS Login 

  1. Click the (+) button on the Start fallback branch
  2. In the search box type empty and select the Empty resource then click “Add Item”
  3. Name the resource O365 Login
  4. Click on the “Branch Rules” tab and click “Add Branch Rule”
  5. Name the branch “Yes” and add the following expression via the Advanced tab
    • expr {[mcget {session.server.landinguri}] contains "username="}
  6. Click Save


Detect O365 Login


Step 2: Extract the Username from the O365 Redirect Request

When the user enters their email address into the O365 login page, this data is included in the GET request back to the APM/ADFS environment.

O365 Login with Federation Redirect
O365 Redirecting to APM/ADFS

To capture this data we’ll need to perform the following steps:

  1. Click the (+) button on the Yes branch off of O365 Login
  2. In the search box type variable and select the Variable Assign resource then click “Add Item”
  3. Name the resource “set username from O365” and add the following variables

session.custom.pos_username = expr {[string first "username=" [mcget {session.server.landinguri}]] + 9}

session.custom.pos_at_sign = expr {[string first "%40" [mcget {session.server.landinguri}]] - 1}

session.logon.last.username = expr {[string range [mcget {session.server.landinguri}] [mcget {session.custom.pos_username}] [mcget {session.custom.pos_at_sign}]]}

Note: list the two position variables first – Variable Assign entries are evaluated in order, so they must exist before the username expression references them.

Click “Finished” and then click “Save”

Set Custom Variables
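The substring math above can be sanity-checked outside of APM. Here is a rough shell sketch of the same extraction, assuming a hypothetical landing URI (the real URI will vary by tenant); it only illustrates the positions the expressions compute, not a replacement for the VPE logic:

```shell
# Hypothetical landing URI as O365 might redirect it (the @ arrives URL-encoded as %40)
uri='/adfs/ls/?username=alice%40example.com&wa=wsignin1.0'

# Same idea as the VPE expressions: take everything between "username=" and "%40"
tmp=${uri#*username=}      # drop everything through "username="
user=${tmp%%"%40"*}        # drop from the encoded @ sign onward
echo "$user"               # prints: alice
```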

Step 3: Customize the APM Logon Page

Now that we have the username we need to prepopulate it on the APM Logon page and set the form variable to read only.

  1. Click the (+) button on the “set username from O365” fallback branch
  2. Select “Logon Page” then click “Add Item”
  3. Name the resource “O365 Logon Page”
  4. Change the “Read Only” setting for field 1 to “Yes”
  5. Click  “Save”
APM Logon Page

The remainder of the VPE configurations are basic and are not covered in this article to keep it from being a novel.

That’s it!  Now your users only have to enter their information once, which will definitely make them happier.

Easily Copy an ISO to Multiple BIG-IPs

With 12.1 dropping yesterday I have multiple BIG-IPs I need to upgrade in my lab environment.  In the lab we have a CIFS share that stores the ISOs, so I can upload the 12.1 ISO to each F5 from that filer with the following commands.

mkdir /tmp/iso
mount -t cifs -o username=user,password=user // /tmp/iso/
rsync --progress -a /tmp/iso/F5/12/BIGIP- /shared/images/

You could also remove the BIGIP- on the rsync command and it would copy all ISOs to the BIG-IP.  This could easily be added to a crontab to make your life easier.
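If the ISOs instead live on an admin host, the fan-out to several BIG-IPs can be scripted too. A minimal sketch, where the hostnames and ISO path are placeholders for your environment (it prints the scp commands; drop the echo to run them for real):

```shell
#!/bin/sh
# Print (or, with the echo removed, run) one scp per BIG-IP.
push_iso() {
  iso=$1; shift
  for host in "$@"; do
    echo scp "$iso" "root@${host}:/shared/images/"
  done
}

push_iso /mnt/isos/BIGIP-12.1.0.iso bigip1.example.com bigip2.example.com
```

Paired with SSH keys, a script like this is also easy to run unattended from cron.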

The Burden of Federated Authentication

Authentication vs Authorization

If you’ve ever had the pleasure to hear me rant on web access management then you know I like to stress the difference between authentication and authorization.  Authentication is the process of verifying a user’s identity while authorization is the process of determining the level of access the user possesses for any given application and/or resource; one does not imply the other. So why bring up a topic that has been discussed to death in many forums around the Internet? Federated authentication…

In my years as a developer and sys admin it was commonplace to either write or include a publicly available authentication framework and, based upon group membership, allow or disallow access to certain application functionality. These frameworks made it easy to quickly deploy applications without the need to “recreate the wheel” over and over again. However, what we gained in deployment speed we lost in code maintenance and software patching over time.  This ultimately led to security issues, such as phishing attacks, because the end-user had:

  1. too many usernames/passwords
  2. too many application login points

Ultimately users could no longer, or rather didn’t care to, keep track of these, and resorted to less secure mechanisms for storing passwords as well as entering their extensive list of passwords on any site that resembled one of our applications.

So in early 2005 we started to use Shibboleth and the Jasig Central Authentication Service (CAS) for federated authentication and single sign-on. Shibboleth and CAS addressed my issues by:

  1. reducing the number of username/password combinations, as well as login entry points, to one
  2. allowing non-employees access to our websites without the need to maintain their identity in our authentication database – commonly known as federated authentication

While the paradigm of federated authentication caught on in higher education over a decade ago, enterprise environments have been slow to adopt it until now. With the explosion of Software-as-a-Service offerings, such as Salesforce and Office 365, enterprises are quickly deploying federated authentication services with little to no understanding of what the snake oil, or IAM, vendor has sold them.

Too often, I sit in meetings regarding issues a customer is having with their IAM solution because of two issues:

  1. Customer did not understand the difference between authentication and authorization
  2. IAM vendor promised that their multi-factor authentication capabilities would integrate easily with <insert other vendor>.

So let’s take a look at these issues.

Federated authentication protocols, like SAML, have made it easier for users to consume and/or modify data inside web applications without the need to maintain a local persona of that user. Now from an application perspective this feature might have marginal gain but from a security perspective it allows you to eliminate a substantial amount of risk. The concern I stress to my customers is that while federated authentication reduces the risk of managing and maintaining that user’s persona it does not alleviate the risk of unauthorized access. This is because authentication in a federated world does not imply authorization.

So what are the challenges of this in a federated application? Let’s assume we have a user Alice accessing App1. App1 has the responsibility of consuming Alice’s authentication assertion, sent by the SAML IdP, as well as authorizing that Alice has access to the application. From a security point of view this can be very dangerous.

  • What if Alice’s credentials were stolen and a malicious actor now has access to sensitive information?
  • What if App1 has not been patched in several years (which never happens in the real world) and is vulnerable to authentication and/or authorization attacks?

Ideally we want to remove the initial authorization functionality from the web application like we have done for authentication. This can be achieved by leveraging a web access management solution that also operates as an authentication proxy. So what is web access management (WAM)? It’s a proxy that controls access to web applications based upon contextual authentication and provides a least privilege model for authorization.

So back to our concern:

  • What if Alice’s credentials are compromised?
    • The WAM can look at the full content of Alice’s connection and request to App1.  If the WAM notices something out of the ordinary, say Alice normally accesses App1 from within the USA but this request is coming from China, then the WAM could request that the IdP perform multi-factor authentication.
  • What if App1 has known vulnerabilities and exploits?
    • The WAM only allows authenticated and authorized users access to App1, so we can reduce our threat vectors; typically in this use case we would combine web access management and a web application firewall to fully alleviate this risk.

Okay, so if you’re still reading, what is my point? That with the growing adoption of federated authentication in enterprises we are relying too heavily upon identity providers to secure access to our applications. Only a handful of identity vendors on the market provide both authentication and web access management capabilities. The vendors that do not possess WAM functionality leave authorization to the application, which in our current security landscape can be a risky bet.

Now, unless you’ve been under a rock for the past 5 years you know I’m a big advocate of F5’s Access Policy Manager. APM is the tool I use over and over again to help my customers resolve their federated authentication burdens. Oh, and did I mention APM works with every major MFA vendor on the market? So you can easily add second-factor authentication to services such as Office 365 – even with free solutions like Google Authenticator.

Kerberos is Easy – Part 1

I’m about to say two words that bring tears and frustration to most application developers and administrators alike.  Are you ready?  Okay… Kerberos authentication.  There, I said it, and if that was not bad enough I’m going to say another phrase that will cause rage and a fit that makes Lewis Black look calm… Kerberos is easy.  That boom you hear was me dropping the mic; yea, I said Kerberos was easy.  So if you’re still reading you are more than likely saying this guy is full of $%*! and you’d be somewhat right, but that’s a topic for another time.  Now, the reason I say Kerberos is easy is because most problems can usually be traced down to a fuzzy understanding of how APM authentication and Single Sign-On (SSO) work or the following common Kerberos issues:

  • Service Principal Name (SPN) issues
  • DNS issues
  • Time issues

In this blog series we’ll examine how F5 APM and Kerberos work together as well as how to troubleshoot the three most common issues that plague Kerberos implementations.  So let’s get started.

APM and Kerberos

In regards to Kerberos and F5 Access Policy Manager (APM) the below information and advice will save you a lot of time and hopefully some hair; for me it’s too late… Kerberos took the best of me a long time ago.

  1. Client authentication is completely separate from server authentication
  2. For the love of all that is holy please don’t troubleshoot client authentication and server authentication issues at the same time

Authentication with a Full Proxy

So if you’ve ever seen me present, I typically pound the drum over and over again about how F5 is a “dual-stack full proxy”, which is fancy marketing speak for two completely separate TCP stacks.  What I mean by this is the client has a completely separate TCP connection from the server. This gives the F5 a lot of power but it also confuses many administrators that are new to the BIG-IP platform.

Full proxy

So for authentication this means that auth requests to a website do not just “pass through” the F5 to the server.  The user authenticates to the F5 and then the F5 authenticates to the server. When we’re dealing with Kerberos this means “Kerberos” can refer to two separate authentication options:

  • client authentication to the F5
  • server authentication from the F5 using Kerberos Constrained Delegation

Now we typically refer to the client side as AAA authentication and the server side as Single Sign On. If you’re new to this I highly recommend you check out Brett Smith’s Single Sign On (SSO) using Kerberos post on DevCentral.  This will walk you through how to configure the APM AAA object and how to configure the Kerberos SSO object.

Break The Issue Down

Once you’ve set up the Kerberos AAA authentication and the Kerberos SSO you’ll typically fire up a browser, try to authenticate against APM and expect to see your protected website.  Now, if you’re one of the lucky few this will just work and fireworks will start going off in the distance as a parade in your honor proceeds past the water cooler.  For the rest of us this will fail in epic fashion and we’ll start channeling Lewis Black again as we yell profanity at our computer; I promise you that 1 in 100 times this will make things work, so I strongly encourage this practice.  At this point take a deep breath and start breaking the problem into smaller bites that we can address.

1. Did APM authenticate the user correctly?

If you look under Managed Sessions in the APM menu and you see a green ball with your username in the table then Kerberos AAA is working.  If not, we now know what to focus on and we can start troubleshooting this aspect of the APM policy without worrying about the Kerberos SSO – which you should ignore until this step works.

2. Did your browser present a pop-up asking for username/password?

This is typically a telltale sign that Kerberos SSO did not work.  Now, you can troubleshoot this with your APM policy as is, but I like to build a separate APM policy that only deals with Kerberos SSO and does not perform client authentication. This allows me to quickly progress through multiple testing iterations without constantly logging in (keep in mind that Kerberos SSO can be used with any form of client authentication, from SAML to Forms Auth to Basic).  I credit Kevin Stewart for this trick and it has greatly reduced the time it takes for me to solve Kerberos issues.  So what does this policy look like?

KCD-VPE variable-assign

Pretty simple: you only need a Variable Assign that sets the APM session variables the Kerberos SSO configuration is looking for:

  • session.logon.last.domain
  • session.sso.token.last.username
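
In that test policy the Variable Assign can simply hard-code a known-good identity. A sketch in the same expr syntax as the earlier Variable Assign, where EXAMPLE.COM and testuser are placeholders for a real domain and account in your environment:

```
session.logon.last.domain = expr {"EXAMPLE.COM"}

session.sso.token.last.username = expr {"testuser"}
```

Once Kerberos SSO works with these static values you can swap the policy back to real client authentication.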

Up Next

In the next post we’ll examine the steps you’d take to troubleshoot Kerberos authentication, and then we’ll tackle Kerberos SSO.

Featured image courtesy of renee.

How to Monitor a TLS 1.0 Application


With the slew of SSL and TLS based vulnerabilities over the last two years F5 administrators have been forced to become more cognizant of the encryption standards used in their environment.  While disabling SSLv3 and TLSv1 is a critical step in securing your infrastructure, you may find yourself stuck with application servers that only support TLSv1 or weaker protocols.

HTTPS monitors in TMOS always default to the latest protocol version supported by OpenSSL, but when you upgrade to 11.5.0 and higher the HTTPS monitors will not utilize SSLv3 or TLSv1.  If you’re stuck with application servers that require TLSv1 this puts you in a sticky situation.  Now, I don’t know the dynamics of your organization, and while upgrading the application server to support a more secure protocol is the ideal way to solve this issue it might not be feasible for you.  For those customers, the steps below outline the process of creating an external monitor that uses TLSv1 to perform health checks.

Note: this article is based upon TMOS 11.5 and higher.  If you’re running another version this process should still work but the step-by-step instructions may differ for your configuration.

Create a Monitor Script

Grab the HTTP monitor script from CodeShare on DevCentral and modify the curl statement on line 48 from:

curl -fNs http://${IP}:${PORT}${URI} | grep -i "${RECV}" 2>&1 > /dev/null

to:

curl -NksSf --tlsv1 https://${IP}:${PORT}${URI} | grep -i "${RECV}" 2>&1 > /dev/null
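The script’s UP/DOWN decision ultimately hinges on grep’s exit status against the response. A quick local sketch of just that match logic, using a canned response instead of a live server (no network needed):

```shell
# Simulate the monitor's match logic with a fake HTTP response
RECV="200 OK"
response='HTTP/1.1 200 OK'
if printf '%s\n' "$response" | grep -qi "${RECV}"; then
  status=UP
else
  status=DOWN
fi
echo "$status"    # prints: UP
```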

Save the modified script to your desktop and upload the script to your BIG-IP through the File Management options under the System menu.

Note: I named my monitor tlsv1_monitor which will be referenced throughout this document.

Create an External Monitor

Now we can create an external monitor based upon our monitor script.

  1. In the left hand menu click Local Traffic -> Monitors
  2. Click the create button in the top right corner
  3. Name your monitor (remember you can’t use spaces in the name)
  4. Select External for the type
  5. Select your external monitor for the External Program
  6. Click Finished


Test Your External Monitor

Now that you’ve created an external monitor based upon your monitor script we need to test it. You could go for broke and assign it to a pool, but I prefer to know things are working as intended and not because I goofed up somewhere!

When you upload your monitor script TMOS stores it in the filestore.  So to test this script we’ll need to SSH into the BIG-IP and access the BASH console.

Initial Test

Note: TMOS adds a unique identifier to the script name. So your script name will be different than the example below.  You’ll also need to enter your own IP address and port for the two script arguments.

  1. cd /config/filestore/files_d/Common_d/external_monitor_d
  2. ./\:Common\:tlsv1_monitor_386911_1 443

If everything works the script should return UP.


Now that our script is working we need to verify that it’s actually using TLSv1.  To determine this we’ll take a tcpdump while issuing the command above and then verify the protocol with the ssldump command.

Note: You’ll need to modify the IP address and TCP port to match your environment.

Type the following commands on the CLI:

  1. tcpdump -vvv -s0 -nni External -w /var/tmp/tlsv1.cap host and port 443
  2. ctrl+z
  3. bg
  4. ./\:Common\:tlsv1_monitor_386911_1 443
  5. fg
  6. ctrl+c

This will start a TCP dump and then send the process to the background (ctrl+z pauses and bg sends it to the background). Once the monitor command is executed the fg command will bring the tcpdump process back to the foreground and ctrl+c will terminate the tcpdump.

Note: The tcpdump command will still display information to the CLI so you may have a hard time seeing what you’re typing.  My recommendation is to paste the monitor script command.


Once you have the tcpdump we can use ssldump to view the protocol used between the F5 and your server.  Issue the following commands at your CLI:

  1. ssldump -H -nr /var/tmp/tlsv1.cap | grep Version

This will display the SSL record messages and search for the version used.  In our case we’re looking for Version 3.1 – the wire version for TLS 1.0 (TLS 1.1 is 3.2 and TLS 1.2 is 3.3):


Assign your External Monitor

Now that everything is working as intended you can assign the new external monitor to your application pool.  If you don’t know how to do this I highly recommend you check out the DevCentral Whiteboard Wednesday session on monitors.

APM Troubleshooting with ADTest


When I first started working in IT it drove me crazy when users would verify that their Internet connection was working by opening a browser and trying to get to Google.  Ideally they should have used ping and progressed through the process of pinging their gateway, then their exit router and then a public DNS server to determine if their Internet connection was working – yea right!

Well, I have this same feeling when someone that’s new to APM configures AD authentication and then immediately opens a browser and tries to authenticate to an APM application, only to find it doesn’t work.  The problem with this approach is that the APM login page will give you very little data as to why this did not work.  A better method would be to use a tool on the F5 that can test AD authentication and verify that your network, DNS and firewall settings are all correct, so AD authentication stands a good chance of being successful.  Such a tool exists via the CLI and it’s called adtest.

I’ve mentioned adtest before in a few of my posts but I’d like to give you a little more insight into how I use this tool.  When working with Active Directory there are a few things you need to know regarding how APM performs authentication and data retrieval.  When using the Active Directory AAA object APM will use Kerberos for authentication and LDAP for AD Query processes.  APM authenticates the user via their credentials and not via the service account configured in the AD AAA object.  So for Kerberos authentication to work you must have DNS configured correctly on your BIG-IP and you need to ensure the BIG-IP can access port 88 on the Active Directory Domain Controllers.  Something to note: adtest does not use the data in your AD AAA object; instead these options are entered by you via the CLI when executing the adtest command.

So let’s get down to it:

adtest -t auth -d 10 -r -u gututest

So what does this command do?

  • -t auth tells adtest that we’ll be authenticating against AD versus querying.
  • -d 10 sets the debug level to its highest setting so we can see all errors that may occur
  • -r defines the realm we’ll be authenticating against
  • -u defines the username we’re authenticating with

So what can go wrong when using this command?

1. DNS not configured or misconfigured

If adtest states that it can’t find the defined realm then either the BIG-IP DNS configuration is not correct or your DNS infrastructure does not have the correct service records to point Kerberos clients to the correct KDC.  If your DNS infrastructure is missing the correct service records you can use the -h flag to specify the Active Directory Domain Controller’s name.

adtest -t auth -d 10 -h -r -u gututest

2. Firewall Issues

If the F5 is not on the same layer 2 network as the preferred Active Directory Domain Controller then there is a good chance our Kerberos request will traverse a firewall and/or IPS solution.  While adtest does not have specific error messages that would indicate a firewall issue exists, you would probably start to suspect this once you’ve verified that your routing, DNS and test credentials are all correct.  A definitive way to prove this is to take a tcpdump via the BIG-IP CLI.  An easy way to capture and review traffic on the BIG-IP CLI is with tshark (on TMOS 11.3 and higher).  The command below listens for all Kerberos traffic:

tshark -i 0.0 -d tcp.port==88,kerberos -R kerberos -nVXs0

Note: tshark will create temporary files in /tmp/ that will need to be deleted once you’re done.  Otherwise you may fill up your BIG-IP disk and cause bigger problems than authentication not working.

3. ADtest works but APM AD AAA Does Not

While this can be caused by many things, typically I see it boil down to the following issues:

  1. APM AD AAA configuration does not match adtest CLI arguments
  2. APM AD AAA is using a pool of AD servers

Obviously the 1st issue can be fixed by re-entering your APM AD AAA configuration object – I know this seems silly but it happens more than you’d think.  The 2nd issue stems from the fact that adtest may not perform authentication against the same Active Directory Domain Controllers as you’ve specified in your APM AD AAA configuration.  When adtest uses the -r flag it queries DNS to obtain a KDC server for authentication.  An easy way to ensure the Active Directory Domain Controllers you’ve selected work with APM is to perform an adtest against each domain controller using the -h flag:

adtest -t auth -d 10 -h -r -u gututest

adtest -t auth -d 10 -h -r -u gututest

adtest -t auth -d 10 -h -r -u gututest

I hope this post helps give you an idea of how I use this tool and I’ll keep updating this page as I find new ways to troubleshoot with adtest.