Programmatically authenticate against Apache CXF Fediz with ADFS Token

Powershell, Windows

A couple of weeks ago we had to interface with an application running on Tomcat that uses Apache CXF Fediz as its authentication mechanism. We had successfully tied the application to our ADFS 3.0 server using SAML 1 tokens. While this worked wonderfully for users on web browsers, we had problems getting it to work programmatically with PowerShell, which we needed for some API calls that require authenticating with ADFS first.

Below you will find the scripts we used along with a description. I have posted two scripts: one where you obtain an initial cookie from the application, as this was a requirement in our case, and a second one where an initial cookie is not needed. If you get the message “HTTP Status 408 – The time allowed for the login process has been exceeded. If you wish to continue you must either click back twice and re-click the link you requested or close and re-open your browser” then you need to use the cookie method.

So how does the script work:

  • First it obtains the needed cookie from the Apache application and stores it in a web session.
  • Then it creates the envelope for the SOAP call to the ADFS server. We are requesting an “urn:oasis:names:tc:SAML:1.0:assertion”, but you can request an “urn:oasis:names:tc:SAML:2.0:assertion” if need be.
  • It then makes a POST request to the ADFS server with the envelope in the body.
  • Once it receives the reply, it cleans it, as we only require the body section of the result.
  • The script then loads the result into a hashtable.
  • It then makes a POST request with the hashtable in the body to the Apache application, using the web session we initially established.
  • Once that is complete, we can use the web session to make any API calls we like, e.g. getting a status.

 

For applications requiring an initial cookie:
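A minimal sketch of the cookie method follows. All URLs, the username and the password are placeholders, and the exact endpoints and token handling may need adjusting for your environment:

# Placeholder values - replace with your own
$appUrl   = "https://app.example.com/myapp/"                                  # Fediz-protected application
$adfsUrl  = "https://adfs.example.com/adfs/services/trust/13/usernamemixed"   # ADFS WS-Trust 1.3 endpoint
$username = "DOMAIN\user"
$password = "P@ssw0rd"

# 1. Obtain the initial cookie from the application and keep it in a web session
$null = Invoke-WebRequest -Uri $appUrl -SessionVariable fedizSession -UseBasicParsing

# 2. Build the WS-Trust envelope requesting a SAML 1.1 assertion
$envelope = @"
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:a="http://www.w3.org/2005/08/addressing">
  <s:Header>
    <a:Action s:mustUnderstand="1">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</a:Action>
    <a:To s:mustUnderstand="1">$adfsUrl</a:To>
    <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <o:UsernameToken>
        <o:Username>$username</o:Username>
        <o:Password>$password</o:Password>
      </o:UsernameToken>
    </o:Security>
  </s:Header>
  <s:Body>
    <trust:RequestSecurityToken xmlns:trust="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
      <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
        <a:EndpointReference>
          <a:Address>$appUrl</a:Address>
        </a:EndpointReference>
      </wsp:AppliesTo>
      <trust:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</trust:KeyType>
      <trust:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</trust:RequestType>
      <trust:TokenType>urn:oasis:names:tc:SAML:1.0:assertion</trust:TokenType>
    </trust:RequestSecurityToken>
  </s:Body>
</s:Envelope>
"@

# 3. POST the envelope to ADFS
$reply = Invoke-RestMethod -Uri $adfsUrl -Method Post -Body $envelope -ContentType "application/soap+xml; charset=utf-8"

# 4. Clean the reply - keep only the token from the SOAP body
$token = $reply.Envelope.Body.InnerXml

# 5. Load the WS-Federation sign-in parameters into a hashtable
$form = @{
    wa      = "wsignin1.0"
    wresult = $token
    wctx    = $appUrl    # depending on the application, this may need to be the wctx value it generated itself
}

# 6. POST the token back to the application using the web session from step 1
$null = Invoke-WebRequest -Uri $appUrl -Method Post -Body $form -WebSession $fedizSession -UseBasicParsing

# 7. The web session now holds the authenticated cookies, so API calls can be made with it
Invoke-RestMethod -Uri "$($appUrl)api/status" -WebSession $fedizSession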

For applications not requiring an initial cookie:
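For this variant the only difference from the sketch above is that step 1 is dropped and the web session is created when the token is posted back, for example:

# Same flow as above, minus step 1 - the session is created on the final POST instead
$null = Invoke-WebRequest -Uri $appUrl -Method Post -Body $form -SessionVariable fedizSession -UseBasicParsing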

 

Cooler Master Aquagate Max – Pump Replacement

Cooling

For those of you lucky enough to have bought a water-cooling kit back in the day (2008 :D), when there were just a few choices and prices were very high, you have probably heard of the Cooler Master Aquagate Max. And if you were lucky enough, you actually got one of these bad boys:

cooler_master1

Well, after 8 years of continuous operation cooling a Q9550 + GTX 260 and later an i7 4790K + GTX 970, the water pump has finally given up and called it quits. I have to say I was very impressed with this product and regard it as one of the best PC purchases I have ever made. Let's face it: 8 years of operation, and being probably the only part I haven't upgraded over the years, it has stood the test of time, and it's a real shame it was discontinued. I never imagined it would last this long; by now I expected it to have either leaked and destroyed my PC or broken.

So I had two options: buy a new kit or replace the pump. I decided to go with option two (because I am sentimental). In this article I will go over what you need; while the chances of this article actually helping someone are very remote, some might find it interesting.

jingway-dp-600

Obviously the best option is to buy a direct replacement. The original kit is powered by an S-Type Jingway DP-600, which delivers 520 L/h and is very quiet and long lasting :D. The good news is that the company still operates (http://www.jingway.com.tw/en/products.html) and you can buy the pump. Now, me being me, I couldn't wait for the shipping from China given my computer was out of action, so I decided to do this the hard way and get a different pump.

 

phobya-dc12-400

 

After some searching I found that the Phobya series is the best replacement, and there is a good reason for this: it seems it's either a sister company or they have bought the designs from Jingway. Now, if you want a perfect fit, go for the Phobya DC12-220 (400 L/h), which will fit nicely in the gap; I, however, decided to go for the more powerful model, the Phobya DC12-400 (800 L/h). One note to make is that I am not entirely happy with the Phobya DC12-400: it does cause a few vibrations and, being in the metal case, produces a lot of noise at 100% power, so much so that I decided to plug it into the motherboard and run it at only 50%. At this speed you can't hear the pump at all and it still keeps the temperature quite low. One very important note: you will also need to purchase two G1/2″ to 3/8″ barb fittings. Do not make the mistake of getting G1/4″: while the tubing on the outside is 1/4″, the tubes used inside the box are actually 1/2″. Don't worry if you make that mistake as I did 😀; you can stretch a 1/4″ tube and get it to fit, as shown later in the photos. I was lucky enough to have one spare G1/2″ to 3/8″ barb fitting, so I only needed to stretch one tube, the clear one.

Before:

cooler-master2

After:

cooler-master3

 

 

Renew Deleted Expired Certificate For Windows Service Bus

Windows

Renewing an expired certificate for Windows Service Bus is quite simple and the process is documented on MSDN.

1. Run Stop-SBFarm on one of the nodes in the farm.
2. Install a new certificate on all Service Bus machines.
3. Run Set-SBCertificate -FarmCertificateThumbprint <thumbprint of the new farm certificate> -SkipKeyReEncryption.
4. Run the Update-SBHost cmdlet on all farm nodes.
5. Run Set-SBNamespace -Name <namespace> -PrimarySymmetricKey <service namespace key>.
6. Call the Start-SBFarm cmdlet on one of the farm nodes.
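As a consolidated reference, the sequence above looks roughly like this from the Service Bus PowerShell console (the thumbprint, namespace name and key below are placeholders):

# Run on one node of the farm
Stop-SBFarm

# After installing the new certificate on every Service Bus machine:
Set-SBCertificate -FarmCertificateThumbprint "0123456789ABCDEF0123456789ABCDEF01234567" -SkipKeyReEncryption

# Run on every farm node
Update-SBHost

# Placeholder namespace name and key
Set-SBNamespace -Name "MyNamespace" -PrimarySymmetricKey "base64ServiceNamespaceKey=="

# Run on one node of the farm
Start-SBFarm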

However, if the expired certificate has been deleted, you will run into issues running any command against the Service Bus.

You will most likely receive the following error:

Certificate requested with thumbprint not found in the certificate store

certerror

I have seen various methods to resolve this by editing the registry, removing entries from SQL, or re-implementing your Service Bus, but a cleaner method is to simply restore the expired certificate so it can be renewed.

1. Log on to your certificate authority.
2. Find the issued certificate by filtering on the Certificate Hash field and entering the thumbprint of the expired certificate (note: the thumbprint format uses spaces).
3. Select the certificate, export it as binary and save it with the .cer file extension.
4. Copy the .cer file to your Service Bus server.
5. Import the certificate into the local store.
6. Open the certificate store and view the properties of the imported certificate. Select the Details tab and note down the serial number.
7. Open a command prompt as administrator and run the following: certutil -repairstore my "serialnumber"
8. Open PowerShell as administrator and run the following: Get-SBFarm
9. Run the following: Start-SBFarm
10. You can now follow the procedure above to renew an expired certificate.
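Steps 7 to 9 consolidated (the serial number below is a placeholder; use the one you noted down from the certificate properties):

# Re-attach the private key to the restored certificate
certutil -repairstore my "00a1b2c3d4e5f60708090a0b"

# Verify the farm can read the certificate again, then start it
Get-SBFarm
Start-SBFarm
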

AWS Solution Architect Professional Level Sample Exam Answers

AWS

AWS provides six sample questions to give a feel for the type of questions presented in the exam; however, they do not provide answers.

The sample questions can be downloaded from here:
https://d0.awsstatic.com/Train%20%26%20Cert/docs/AWS_certified_solutions_architect_professional_examsample.pdf

The answers are as follows:

Question 1: A) Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Generate an EBS volume of static content from the Storage Gateway and attach it to the JBoss EC2 server.

Question 2: C) Take hourly DB backups to Amazon S3, with transaction logs stored in S3 every 5 minutes.

Question 3: B) Register the application with a Web Identity Provider like Amazon, Google, or Facebook, create an IAM role for that provider, and set up permissions for the IAM role to allow S3 gets and DynamoDB puts. You serve your mobile application out of an S3 bucket enabled as a web site. Your client updates DynamoDB.

Question 4: D) Use Elastic Load Balancing to distribute traffic to a set of web servers. Configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the SSL transactions, and write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.

Question 5: D) Configure an SSL VPN solution in a public subnet of your VPC, then install and configure SSL VPN client software on all user computers. Create a private subnet in your VPC and place your application servers in it.

Question 6: B) Sync the application data to Amazon S3 starting a week before the migration, on Friday morning perform a final sync, and copy the entire data set to your AWS file server after the sync completes.


Azure NSG Ports/Rules for HDInsight outbound

Microsoft Azure

A few weeks ago we had a requirement to restrict the outbound ports of HDInsight for security reasons, so this article is dedicated to that requirement. Before we begin, Microsoft's official position on this is:

Important: HDInsight doesn’t support restricting outbound traffic, only inbound traffic. When defining Network Security Group rules for the subnet that contains HDInsight, only use inbound rules.

After reading the above (from https://azure.microsoft.com/en-gb/documentation/articles/hdinsight-extend-hadoop-virtual-network/) we took it as a challenge to get this working, and after much testing we managed to identify all the required ports. We have tested by deploying multiple clusters and so far everything works and deploys correctly. A couple of notes:

  • The solution below is not 100% secure, but it mitigates the risk by lowering the attack surface to only the regional Azure IPs.
  • We also needed to open port 80 to the Ubuntu website (91.189.88.0/21), as this is required by some of the Apache tests after deployment.
  • While testing we noticed that the servers communicate with the management point over a random port. This port seemed to be in the same range as the dynamic Azure SQL ports of 11000-11999 and 14000-14999; however, to be on the safe side we opened a larger range of 10000-49151, as we can't be 100% sure.
  • You will need to create multiple rules for each Azure regional IP range (I suggest you combine the IPs down to the second octet). The IP addresses can be found here: https://www.microsoft.com/en-gb/download/details.aspx?id=41653. You will also need to keep the IP addresses updated (a new XML file is uploaded every Wednesday (Pacific Time) with the new planned IP address ranges; new ranges become effective on the following Monday (Pacific Time)).
  • This is all unofficial, and while we have had no problems with multiple deployments, I can't give any guarantees.

Inbound Ports

Name | Priority | Action | Source | Source Port | Destination | Destination Port | Protocol | Direction | Description
Allow-HDinsight01-Inbound | 1001 | Allow | 168.61.49.99/32 | * | Subnet Range | 443 | * | Inbound | Required for HDInsight health checks
Allow-HDinsight02-Inbound | 1002 | Allow | 23.99.5.239/32 | * | Subnet Range | 443 | * | Inbound | Required for HDInsight health checks
Allow-HDinsight03-Inbound | 1003 | Allow | 168.61.48.131/32 | * | Subnet Range | 443 | * | Inbound | Required for HDInsight health checks
Allow-HDinsight04-Inbound | 1004 | Allow | 138.91.141.162/32 | * | Subnet Range | 443 | * | Inbound | Required for HDInsight health checks

Outbound Ports

Name | Priority | Action | Source | Source Port | Destination | Destination Port | Protocol | Direction | Description
Allow-HDInsightToUbuntu-Outbound | 2001 | Allow | Subnet Range | * | 91.189.88.0/21 | 80 | TCP | Outbound | Required for HDInsight
Allow-HDinsight01-Outbound | 2002 | Allow | Subnet Range | * | Azure Regional Range | 80 | TCP | Outbound | Required for HDInsight
Allow-HDinsight02-Outbound | 2003 | Allow | Subnet Range | * | Azure Regional Range | 443 | TCP | Outbound | Required for HDInsight
Allow-HDinsight03-Outbound | 2004 | Allow | Subnet Range | * | Azure Regional Range | 1433 | TCP | Outbound | Required for HDInsight
Allow-HDinsight04-Outbound | 2005 | Allow | Subnet Range | * | Azure Regional Range | 10000-49151 | TCP | Outbound | Required for HDInsight
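If you are scripting the rules, a minimal sketch of creating one inbound and one outbound rule from the tables above with the AzureRM module looks like this (the NSG name, resource group and subnet range are assumptions):

# Placeholder NSG name and resource group
$nsg = Get-AzureRmNetworkSecurityGroup -Name "hdinsight-nsg" -ResourceGroupName "rg-hdinsight"

# Inbound health-check rule; 10.0.0.0/24 stands in for your HDInsight subnet range
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-HDinsight01-Inbound" -Priority 1001 `
    -Access Allow -Direction Inbound -Protocol "*" `
    -SourceAddressPrefix "168.61.49.99/32" -SourcePortRange "*" `
    -DestinationAddressPrefix "10.0.0.0/24" -DestinationPortRange 443 | Out-Null

# Outbound rule to the Ubuntu website
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-HDInsightToUbuntu-Outbound" -Priority 2001 `
    -Access Allow -Direction Outbound -Protocol Tcp `
    -SourceAddressPrefix "10.0.0.0/24" -SourcePortRange "*" `
    -DestinationAddressPrefix "91.189.88.0/21" -DestinationPortRange 80 | Out-Null

# Push the updated rule set back to Azure
$nsg | Set-AzureRmNetworkSecurityGroup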

ADFS Claim Rules for Groups and Cross Forest

Windows

Here are some quick ADFS claim rules for some specific requirements. Remember to create the rules in order:

Case 1

Get the user's group membership, including groups of groups, filter for any group beginning with “Group-XX”, and then send it as a role claim:

Rule 1

Rule 2
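If you prefer to set these from PowerShell rather than the GUI, a minimal sketch of what the two rules above could look like follows. The relying party name “My Application” is a placeholder, and the exact rules may differ from the ones shown in the screenshots:

# A sketch only - note that -IssuanceTransformRules replaces any existing rules on the trust
$rules = @'
@RuleName = "Rule 1 - get nested group membership"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => add(store = "Active Directory", types = ("http://schemas.xmlsoap.org/claims/Group"), query = ";tokenGroups;{0}", param = c.Value);

@RuleName = "Rule 2 - filter on Group-XX and send as role"
c:[Type == "http://schemas.xmlsoap.org/claims/Group", Value =~ "(?i)^Group-XX"]
 => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", Value = c.Value);
'@

Set-AdfsRelyingPartyTrust -TargetName "My Application" -IssuanceTransformRules $rules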

 

Case 2 (Update 13/09/2016 – Apologies, as I had uploaded the wrong rules initially; they are now correct)

Get the user's cross-forest security group membership (from the TESTDOMAIN domain), including groups of groups, filter for any group beginning with “Group-XX”, and then send it as a role claim. Before you set these rules, remember to give the ADFS service account access to read the foreign group membership of the domain you are querying, as detailed here: https://social.technet.microsoft.com/Forums/windowsserver/en-US/bda33eb9-ff6e-4e79-967d-f5430ade7310/give-access-to-account-to-view-member-of-attribute-on-foreign-security-principal?forum=winserverDS

  • Replace TESTDOMAIN with the domain you are trying to query.
  • Replace Group-XX with the beginning of the group(s) you are looking for; it is a regular expression and you can customize it to your needs. Alternatively, you can remove “,  Value =~ “(?i)^Group-XX” ” and that will list all groups.

Rule 1

Rule 2:

Rule 3:

Rule 4:

Rule 5:

 

 

Turn off ProtectedFromAccidentalDeletion on an OU and All Sub-OUs

Windows

If you have ever had the task of deleting an OU which had Protected From Accidental Deletion enabled on all sub-OUs, it can be a pain to manually uncheck it for every single one. The easy fix is to run a command that turns off the feature on all sub-OUs for you. To do this we run the following PowerShell command; just replace the path to your OU and the server, and leave the rest as it is:
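A minimal sketch (the OU distinguished name and domain controller are placeholders):

# Remove accidental-deletion protection from the OU and every sub-OU beneath it
Get-ADOrganizationalUnit -Filter * -SearchBase "OU=MyOU,DC=contoso,DC=com" -Server "dc01.contoso.com" |
    Set-ADObject -ProtectedFromAccidentalDeletion $false -Server "dc01.contoso.com"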

 

Update Azure Automation PS Modules at once

Microsoft Azure

Update 19/02/2017

Microsoft has now introduced a new button which updates all modules for you. It is very easy, as shown in the screenshot:

 

 

-== Outdated ==-

Now, if any of you use Azure Automation, you know that updating the PowerShell modules is a pain, as they require dependencies to be installed first. This can easily take you a whole day to do by hand. However, there is a very easy way to do this.

The way to update them all, including dependencies, is to run the JSON template for the AzureRM module. You can find the Deploy to Azure button and the latest version here: https://www.powershellgallery.com/packages/AzureRM/. All you need to do is choose the subscription, resource group and your Automation account.

Please note that the file currently does not include all regions and as such you may need to do it manually as described below:

You can download the 2.0.1 JSON template straight from here: https://devopsgallerystorage.blob.core.windows.net/armtemplates/AzureRM%5C2.0.1%5CRootTemplate.json

You may need to modify it, as the file does not include all regions where Automation is available and the deployment will otherwise fail. Before you run the template, replace the Automation account location section of the JSON so that it includes your region.
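If you prefer to run the modified template by hand rather than through the Deploy to Azure button, a deployment along these lines should work (the resource group name and the template parameter names are assumptions; check the template's parameters section for the real ones):

# Deploy the AzureRM module template into the resource group that holds the Automation account
New-AzureRmResourceGroupDeployment -ResourceGroupName "rg-automation" `
    -TemplateUri "https://devopsgallerystorage.blob.core.windows.net/armtemplates/AzureRM%5C2.0.1%5CRootTemplate.json" `
    -TemplateParameterObject @{
        # Parameter names below are assumptions - adjust them to match the template
        accountName               = "MyAutomationAccount"
        automationAccountLocation = "West Europe"
    }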

 

Configure GitLab SAML with ADFS 3.0

Windows

While setting up GitLab with ADFS 3.0, we noticed there are a couple of gotchas you need to watch out for:

  1. You need to set the NotBeforeSkew to something like 2 in ADFS.
  2. You need to transform the transient identifier in ADFS.
  3. The idp_cert_fingerprint is case sensitive and needs to be all in capitals.

To set it up, follow these instructions:

In GitLab you need to set the following config:

  • Replace https://gitlab.com with your GitLab address
  • Replace https://adfs.com with your ADFS address
  • Replace https://gitlab.local with whatever you like
  • Replace 35:FA:DD:CF:1E:8F:8B:E4:CA:E1:AE:2A:EF:70:95:D5:DC:5C:67:1B with the fingerprint of your signing certificate

 

For ADFS, configure the following settings (use the same address replacements as above):

gitlab1

gitlab2

gitlab3

gitlab4 gitlab5 gitlab6

Then run the following command in PowerShell on the ADFS server to set the skew:
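For example (the relying party trust name “GitLab” is a placeholder):

# Allow a small clock skew so the SAML assertion is not rejected as not yet valid
Set-AdfsRelyingPartyTrust -TargetName "GitLab" -NotBeforeSkew 2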

 

Custom Script extension for ARM VMs in Azure

Microsoft Azure

Updated 08/01/2017

Due to the lack of articles regarding this topic, I decided to do a quick post on how to get the Custom Script extension to work correctly on both Linux and Windows ARM (Resource Manager) virtual machines.

Some very important notes and key differences before we get started:

  • For Windows machines
    • The extension details are:
      • $ExtensionType = ‘CustomScriptExtension’
      • $Publisher = ‘Microsoft.Compute’
      • $Version = ‘1.8’
    • When entering the commandToExecute, note that in Windows the command is executed from the root of the container. This means that if your script is located at “\scripts\version1\my.ps1” on the blob storage container, to run the PowerShell script you need to reference the full path, as shown in the sketch after this list. This is because when the agent downloads the files it recreates the folder structure the same way as the blob (do not put the container name in the path!).
    • If you get the directory path wrong, this will be indicated by an error in the logs located at “C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Status”.
    • The download directory is located in “C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downloads”
    • Log files are in “C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension\1.8”. Be aware they are a bit generic; for more detailed errors, including errors generated by your script inside the machine when not using throw, please check “C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Status”.
    • If you are using named parameters rather than positional arguments in your script (e.g. -MemoryinGB 100), you need to use the -Command parameter rather than -File, as shown in the sketch after this list.
    • If you want your script to report a meaningful error back to Azure, make sure you use “throw” in your script (the one that runs on the machine) as the exit. If you want to catch all messages, regardless of whether they use throw, make sure you use try, catch and finally as shown in the example scripts at the end of this post; this will give you the error messages in “C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Status”.
    • To get the status you can use the command shown in the sketch after this list. Just specify the virtual machine name, its resource group and the name you gave to your custom extension.
  • For all Machines
    • Any files downloaded are not removed after the script has finished executing.
    • You can use “Get-AzureServiceAvailableExtension” to get all the extensions and their current versions.
    • If you are using Azure Automation scripts and you would like them to fail correctly, make sure you add the appropriate error-handling line at the top of your code (see the sketch after this list).
    • You can only run a script with the same parameters on a virtual machine once. If you want to run it multiple times on the same VM, you can specify a timestamp in the Setting or SettingString of Set-AzureRmVMExtension.
    • If you are using hashtables (and I recommend them), note that a certain format is expected for the fileUris in the Setting of Set-AzureRmVMExtension. Since we can get the extension to download multiple files for us, we need to follow the format shown in the sketch after this list:
      • For a Single File
      • For Multiple Files
    • If you want to execute commands with sensitive variables like passwords you can move the commandToExecute to ProtectedSettings or ProtectedSettingString in Set-AzureRmVMExtension. Make sure you only have it in one place (Setting or ProtectedSetting).
  • For Linux Machines
    • The extension details are:
      • $ExtensionType = ‘CustomScriptForLinux’
      • $Publisher = ‘Microsoft.OSTCExtensions’
      • $Version = ‘1.5’
    • When entering the commandToExecute, note that in Linux the command is executed from the same folder where the script is located. This is because all files are downloaded to the same folder and the blob folder structure is ignored. This means that if your script is located in “\scripts\version1\” on the blob storage container, to run the sh script you ignore the structure and only specify the file name (see the Linux sketch at the end of this post).
    • The download directory is located in “/var/lib/waagent/Microsoft.OSTCExtensions.CustomScriptForLinux-1.5.2.0/download/”
    • Log files are in “/var/log/azure/Microsoft.OSTCExtensions.CustomScriptForLinux/1.5.2.0”
    • If you have your own DNS servers and you haven't set them up to forward Azure DNS queries, you might get an error. If you run “hostname -f” and you get errors, you can tell the custom script extension to skip the DNS check via the Setting or SettingString of Set-AzureRmVMExtension. Note that at this stage the extension wants a bool value; looking at the code, future versions will take a string.
    • To get the status you can use the same command as for Windows. Just specify the virtual machine name, its resource group and the name you gave to your custom extension.
  • Two important notes:
    • You can only have one custom script extension at a time, so you can remove it after you run your script (or before) with the removal command shown in the sketch after this list.
    • Currently, if the extension is installed it will be re-run every time the VM is deallocated and started again; to avoid this, remove the extension the same way.
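To make the notes above concrete, here is a minimal sketch for the Windows variant. All resource, storage and file names are placeholders, and the exact values will differ in your environment:

# An assumption: a common way to make Azure Automation runbooks fail correctly is to make errors terminating
$ErrorActionPreference = "Stop"

# Placeholder names
$resourceGroup = "rg-demo"
$vmName        = "vm-demo"
$location      = "West Europe"

# Settings hashtable - note the fileUris format
$settings = @{
    # For a single file:
    fileUris         = @("https://mystorage.blob.core.windows.net/mycontainer/scripts/version1/my.ps1")
    # For multiple files:
    # fileUris       = @("https://mystorage.blob.core.windows.net/mycontainer/scripts/version1/my.ps1",
    #                    "https://mystorage.blob.core.windows.net/mycontainer/scripts/version1/helper.ps1")

    # Windows runs the command from the root of the download, so include the blob folder structure
    commandToExecute = "powershell -ExecutionPolicy Unrestricted -File scripts\version1\my.ps1"
    # If the script takes named parameters, use -Command instead of -File, for example:
    # commandToExecute = "powershell -ExecutionPolicy Unrestricted -Command `"& .\scripts\version1\my.ps1 -MemoryinGB 100`""
}

# Credentials for the storage account that holds the scripts
$protectedSettings = @{
    storageAccountName = "mystorage"
    storageAccountKey  = "<storage account key>"
}

# Install and run the extension
Set-AzureRmVMExtension -ResourceGroupName $resourceGroup -VMName $vmName -Location $location `
    -Name "MyCustomScript" -Publisher "Microsoft.Compute" -ExtensionType "CustomScriptExtension" `
    -TypeHandlerVersion "1.8" -Settings $settings -ProtectedSettings $protectedSettings

# Get the status of the extension
Get-AzureRmVMExtension -ResourceGroupName $resourceGroup -VMName $vmName -Name "MyCustomScript" -Status

# Remove the extension so it is not re-run when the VM is deallocated and started again
Remove-AzureRmVMExtension -ResourceGroupName $resourceGroup -VMName $vmName -Name "MyCustomScript" -Force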

 

So let's see some scripts. The first is a Windows script which creates a DNS entry on a Domain Controller:
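A minimal sketch of such a script follows (the record name, zone and IP address are placeholders, and the DnsServer module is assumed to be available on the Domain Controller):

# my.ps1 - creates an A record on the Domain Controller it runs on (placeholder values)
param (
    [string]$RecordName  = "myhost",
    [string]$ZoneName    = "contoso.local",
    [string]$IPv4Address = "10.0.0.50"
)

try {
    Import-Module DnsServer -ErrorAction Stop
    Add-DnsServerResourceRecordA -Name $RecordName -ZoneName $ZoneName -IPv4Address $IPv4Address -ErrorAction Stop
    Write-Output "Created A record $RecordName.$ZoneName -> $IPv4Address"
}
catch {
    # Throw so the error surfaces in the extension status on the VM
    throw $_
}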

And here is a Linux script which joins a Linux machine to a domain. However, this time we don't want the execution of the script to log the variables we pass to it, so we move the command to the ProtectedSettings to be encrypted:
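A minimal sketch of how that could be wired up (all names, the script URI and the credentials are placeholders); the commandToExecute lives in the protected settings so the domain credentials are encrypted:

# Placeholder values throughout
$settings = @{
    fileUris = @("https://mystorage.blob.core.windows.net/mycontainer/scripts/joindomain.sh")
}

$protectedSettings = @{
    storageAccountName = "mystorage"
    storageAccountKey  = "<storage account key>"
    # Linux runs the command from the download folder, so only the file name is needed
    commandToExecute   = "sh joindomain.sh contoso.local joinuser 'P@ssw0rd!'"
}

Set-AzureRmVMExtension -ResourceGroupName "rg-demo" -VMName "linux-vm" -Location "West Europe" `
    -Name "MyCustomScript" -Publisher "Microsoft.OSTCExtensions" -ExtensionType "CustomScriptForLinux" `
    -TypeHandlerVersion "1.5" -Settings $settings -ProtectedSettings $protectedSettings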