Month: October 2017

HTTP 400 IIS Token Bloat

An old “friend” keeps showing up in different environments, and since my old blog is gone, I will add it again here.
But this time I’ll just copy the info from Microsoft’s website.

Thank you, Microsoft: bad-request-request-header-too-long-error-in-internet-info


“HTTP 400 – Bad Request (Request header too long)” error in Internet Information Services (IIS)


A domain user attempts to browse to a website hosted on Internet Information Services (IIS) 6.0 or higher by using Internet Explorer 6.0 or later.  The website is configured to use Kerberos authentication.  Instead of receiving the expected web page, the user is presented with an error message similar to the following:

HTTP 400 – Bad Request (Request header too long)



This issue may occur when the user is a member of many Active Directory groups. When a user belongs to a large number of Active Directory groups, the Kerberos authentication token for the user grows in size. The HTTP request that the user sends to the IIS server contains the Kerberos token in the WWW-Authenticate header, and the header size increases as the number of groups goes up. If the HTTP header or packet size exceeds the limits configured in IIS, IIS may reject the request and send this error as the response.



To work around this problem, choose one of the following options:

A) Decrease the number of Active Directory groups that the user is a member of.


B) Modify the MaxFieldLength and the MaxRequestBytes registry settings on the IIS server so the user’s request headers are not considered too long.  To determine the appropriate settings for the MaxFieldLength and the MaxRequestBytes registry entries, use the following calculations:

    1. Calculate the size of the user’s Kerberos token using the formula described in the following article:

      New resolution for problems with Kerberos authentication when users belong to many groups

    2. Configure the MaxFieldLength and the MaxRequestBytes registry entries on the IIS server with a value of 4/3 * T, where T is the user’s token size in bytes.  HTTP encodes the Kerberos token using Base64 encoding, which replaces every 3 bytes in the token with 4 Base64-encoded bytes.  Changes that are made to the registry will not take effect until you restart the HTTP service. Additionally, you may have to restart any related IIS services.
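As a minimal PowerShell sketch of step 2, assuming you have already calculated the token size T in step 1 (the 12000 below is only a placeholder), run elevated on the IIS server:

```powershell
# Placeholder token size in bytes - replace with the value you calculated in step 1
$T = 12000

# Base64 inflates the token by 4/3; add 200 bytes of headroom for the rest of the header
$value = [math]::Ceiling(4 / 3 * $T) + 200

$path = 'HKLM:\System\CurrentControlSet\Services\HTTP\Parameters'
Set-ItemProperty -Path $path -Name MaxFieldLength  -Type DWord -Value $value
Set-ItemProperty -Path $path -Name MaxRequestBytes -Type DWord -Value $value

# The new limits only take effect after the HTTP service (and IIS on top of it) restarts
net stop http /y
iisreset /restart
```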


NOTE: Depending on your application environment, you could also consider configuring the web site to use NTLM instead of Kerberos to work around this problem.  Some application environments require Kerberos to be used for delegation purposes, and Kerberos is more secure than NTLM, so it is recommended that you do not disable Kerberos before considering the security and delegation ramifications of doing so.


More Information

By default, the MaxFieldLength registry entry is not present. This registry entry specifies the maximum size limit of each HTTP request header. The MaxRequestBytes registry entry specifies the upper limit for the total size of the Request line and the headers. Typically, this registry entry is configured together with the MaxRequestBytes registry entry. If the MaxRequestBytes value is lower than the MaxFieldLength value, the MaxFieldLength value is adjusted.  In large Active Directory environments, users may experience logon failures if the values for both these entries are not set to a sufficiently high value.

For Internet Information Services (IIS) 6.0 and later, the MaxFieldLength and MaxRequestBytes registry keys are located at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters.  Configure them as shown in the following table:


Name             Value Type   Value Data
MaxFieldLength   DWORD        (4/3 * T bytes) + 200
MaxRequestBytes  DWORD        (4/3 * T bytes) + 200


Alternatively, you may set the registry keys to their maximum values, shown below. The administrator should consider all potential security ramifications before making any changes to these registry settings:

Name             Value Type   Value Data
MaxFieldLength   DWORD        65534
MaxRequestBytes  DWORD        16777216


IMPORTANT: Changing these registry keys can be considered extremely dangerous. These keys allow larger HTTP packets to be sent to IIS, which in turn may cause Http.sys to use more memory and may increase vulnerability to malicious attacks.


NOTE: If MaxFieldLength is configured to its maximum value of 64KB, then the MaxTokenSize registry value should be set to 3/4 * 64 = 48KB.  For more information on the MaxTokenSize setting, please see the Microsoft knowledge base article KB327825 listed below.


Remove-AzureRmApplicationGateway

If you want to remove Backend HTTP Settings, probe configs, Backend Address Pools, HTTP Listeners, or something else from an Azure Application Gateway, you might end up at the same Microsoft Docs page as me.

As you see from the doc, it tells you what to do, but when you check the config in the Portal, the item is not gone.

What is missing from the docs is the final step: you get the Application Gateway config, you remove the component from it, and you get the code that defines it, but the change is never committed back. The missing ingredient is:

Set-AzureRmApplicationGateway -ApplicationGateway $AppGw

That way you get the current config, remove the part you don’t want, and then SET the new config.
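Put together, the whole round-trip looks something like this (the resource names and the listener are made-up placeholders; swap in your own, and use the Remove-* cmdlet for whatever component you are deleting):

```powershell
# Get the current Application Gateway config into a local object (placeholder names)
$AppGw = Get-AzureRmApplicationGateway -Name "MyAppGw" -ResourceGroupName "MyRG"

# Remove a component from the LOCAL object only - nothing has changed in Azure yet
Remove-AzureRmApplicationGatewayHttpListener -ApplicationGateway $AppGw -Name "MyListener"

# The missing ingredient: push the modified config back to Azure
Set-AzureRmApplicationGateway -ApplicationGateway $AppGw
```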

And now it is correct in the GUI too 🙂

Can I add CPU Core and RAM to Azure VM?

Short answer (for those who just want to know): No, you can’t.
A little more info (for those that want it):
I set up a server for a customer, a DS13_V2 (8 cores, 56 GB RAM).
They tested the server and would like 2 more cores; then they would be very happy with the power of Azure. That server would replace a 64-core on-prem server (WOW, that is some power up there in Azure).
I haven’t even thought about just adding CPU or RAM to my Azure servers. Normally it is HDDs, Network, and so on they want.
So I had a look into it, but in the end I had to contact Microsoft Azure Support and ask if it is possible.
They replied within 30 minutes (10 out of 10 on reply speed), but the answer was not uplifting: it is not possible to just add a CPU core or more RAM. You will need to upgrade the size of the server, and the next level up for us is DS14_V2 (16 cores, 112 GB RAM), BUT that would cost us over $600 more per month, and we only need 2 more itsy-bitsy CPU cores.
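For the record, the size change itself (the only supported way to get more cores) is a quick PowerShell operation, though it restarts the VM. Roughly like this, with placeholder resource names:

```powershell
# Load the VM's current configuration (placeholder names - use your own)
$vm = Get-AzureRmVM -ResourceGroupName "MyRG" -Name "MyVM"

# Pick the new size - you swap the whole size, you cannot add single cores
$vm.HardwareProfile.VmSize = "Standard_DS14_v2"

# Apply the change; the VM is restarted as part of the resize
Update-AzureRmVM -ResourceGroupName "MyRG" -VM $vm
```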
But as MS support said, they also hope the option to add more cores and so on will come to Azure in the near future.
So now I will have to evaluate if 8 cores are enough, or if we feel the need to spend $600 more per month.
BUT I do dig Azure!!

Change the name of your new Azure subscription

This is not a straightforward thing, even though you would think it would be.

1. Open a browser and enter the following:

  1. Sign in with your Azure Subscription Owner ID
  2. Press the subscription you want to change the name of
  3. On the right-hand side, press “Edit subscription details”
  4. Enter the new name below “SUBSCRIPTION NAME”

The name is now changed, but you need to grant access to the subscription, so that you can use it in your environment.

2. Still in the same view:

  1. Press the Portal icon in the upper right
  2. Find the Subscriptions icon in the left-side menu
  3. Press the subscription you want to grant access to
  4. Press the MSN icon, so you can choose Users
  5. Press Add
  6. Choose the access level you want to delegate, and find the user

Now you can utilize the new subscription in your Azure portal
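Once the access is in place, you can sanity-check from PowerShell that the renamed subscription shows up for the delegated user (assuming the AzureRM module is installed):

```powershell
# Log in as the user who was just granted access
Login-AzureRmAccount

# List all subscriptions visible to this account - the new name should appear here
Get-AzureRmSubscription
```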

Azure Admin Pages / URLs

There are so many different locations for Azure Subscription Management and what you can do where. So I have for my own sake made this list.

EA Management Portal

  • View billing
  • Add and View Subscriptions

Manage Windows Azure

  • Add and View Subscriptions
  • Delegate the use of a subscription (so it shows up in the Azure Portal for the administrators that are to use it)
  • Manage most Azure stuff

Portal Azure

  • View and Use Subscription

Account Windows Azure

  • Rename, Add and View Subscriptions

It is a pain in the ass to remember all the different locations, but here you have it (for now).

Create #HASHED password file for PowerShell use

If you want to automate some PowerShell scripts to do a job for you, and you don’t want to (and you never should) put the password in the script, then this is a great thing.

You create an encrypted txt file based on the user ID and password you type into the prompt; only the password is stored in the file. Create the file on the machine, and under the user account, that will run your script: the encryption is tied to that user and machine (Windows DPAPI), so the file cannot be decrypted if you move/copy it to a different computer or try to use it under another account.
So now you can use the password file with the scripts you have created.
# NAME: Encrypt Password for use in Powershell
# AUTHOR: Vincent Christiansen,
# DATE  : 21/01/2016
# COMMENT: Will prompt you for username and password, and will encrypt (DPAPI, not an actual hash) the password to a txt file.
#          This will only be the password. And you must dump the file to the location where you are going to 
#          get it from in the other script
# ==============================================================================================
$credential = Get-Credential
$credential.Password | ConvertFrom-SecureString | Set-Content D:\Scripts\Azure_Encrypted_Password.txt

Connect to Azure/Office365 based on encrypted txt file

This will create a remote PowerShell session to your Azure/Office365 tenant based on the username you specify and the #hashed password text file you created in the previous post (that you can find here).
# NAME: Connect to Azure/Office365
# AUTHOR: Vincent Christiansen,
# DATE  : 21/01/2016
# COMMENT: This script will create a remote session in Azure/Office365 based on the encrypted file you have created.
# =============================================================================================
$username = ""
$encrypted = Get-Content "D:\Scripts\Azure_Encrypted_Password.txt" | ConvertTo-SecureString
$cred = New-Object System.Management.Automation.PsCredential($username,$encrypted)
Import-Module MSOnline
Connect-MsolService -Credential $cred
$ExchangeSession = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $cred -Authentication Basic -AllowRedirection
$importresults = Import-PSSession $ExchangeSession

Get MAC address from remote computer

In some situations you need to get a remote computer’s MAC addresses, and you don’t have physical access to it.

  1. Open a CMD window with your administrative user (one that has admin access to computer objects)
  2. Ping the computer name (to get the IP)
    Wait for the reply…
  3. Type in: getmac /s <ComputerName> /v

Now you get a list with the MAC addresses.
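If you are in PowerShell anyway, the same information can be pulled over WMI (the computer name is a placeholder, and you need admin rights on the target):

```powershell
# Query the remote machine's NIC configuration over WMI (placeholder computer name)
Get-WmiObject -Class Win32_NetworkAdapterConfiguration -ComputerName "PC01" -Filter "IPEnabled = TRUE" |
    Select-Object Description, MACAddress, IPAddress
```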

Update UPN on multiple users

I recently did an LDIFDE import of a lot of users into a test domain, and the UPN was not set on the user objects.

So to change/set the UPN for all the users in the test domain I used this little one-liner. Worked like a charm.

It gets all the user objects in the domain, and it sets the UPN to samaccountname@ followed by the domain you specify:

Get-ADUser -SearchBase "DC=sameie,DC=com" -Filter * | foreach {Set-ADUser $_ -UserPrincipalName ("{0}@{1}" -f $_.samaccountname,"")}
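Spelled out a bit, with a -WhatIf dry run first (the UPN suffix is a placeholder you have to fill in; the one-liner above leaves it blank):

```powershell
# Placeholder UPN suffix - replace with your own domain
$upnSuffix = "example.com"

# Preview first: -WhatIf prints what would happen without touching AD
Get-ADUser -SearchBase "DC=sameie,DC=com" -Filter * |
    ForEach-Object {
        Set-ADUser $_ -UserPrincipalName ("{0}@{1}" -f $_.SamAccountName, $upnSuffix) -WhatIf
    }

# When the preview looks right, remove -WhatIf to apply the change
```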

Azure Server Sizes and information regarding them

This information is collected from Cynthia Nottingham and her article: virtual-machines-windows-sizes

​​​The standard sizes consist of several series: A, D, DS, F, Fs, G, and GS. Considerations for some of these sizes include:

  • D-series VMs are designed to run applications that demand higher compute power and temporary disk performance. D-series VMs provide faster processors, a higher memory-to-core ratio, and a solid-state drive (SSD) for the temporary disk. For details, see the announcement on the Azure blog, New D-Series Virtual Machine Sizes.
  • Dv2-series, a follow-on to the original D-series, features a more powerful CPU. The Dv2-series CPU is about 35% faster than the D-series CPU. It is based on the latest generation 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, and with the Intel Turbo Boost Technology 2.0, can go up to 3.1 GHz. The Dv2-series has the same memory and disk configurations as the D-series.
  • F-series is based on the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, which can achieve clock speeds as high as 3.1 GHz with the Intel Turbo Boost Technology 2.0. This is the same CPU performance as the Dv2-series of VMs. At a lower per-hour list price, the F-series is the best value in price-performance in the Azure portfolio based on the Azure Compute Unit (ACU) per core.

    The F-series also introduces a new standard in VM size naming for Azure. For this series and VM sizes released in the future, the numeric value after the family name letter will match the number of CPU cores. Additional capabilities, such as optimized for premium storage, will be designated by letters following the numeric CPU core count. This naming format will be used for future VM sizes released but will not retroactively change the names of any existing VM sizes which have been released.

  • G-series VMs offer the most memory and run on hosts that have Intel Xeon E5 V3 family processors.
  • DS-series, DSv2-series, Fs-series and GS-series VMs can use Premium Storage, which provides high-performance, low-latency storage for I/O intensive workloads. These VMs use solid-state drives (SSDs) to host a virtual machine’s disks and also provide a local SSD disk cache. Premium Storage is available in certain regions. For details, see Premium Storage: High-performance storage for Azure virtual machine workloads.
  • The A-series VMs can be deployed on a variety of hardware types and processors. The size is throttled, based upon the hardware, to offer consistent processor performance for the running instance, regardless of the hardware it is deployed on. To determine the physical hardware on which this size is deployed, query the virtual hardware from within the Virtual Machine.
  • The A0 size is over-subscribed on the physical hardware. For this specific size only, other customer deployments may impact the performance of your running workload. The relative performance is outlined below as the expected baseline, subject to an approximate variability of 15 percent.

The size of the virtual machine affects the pricing. The size also affects the processing, memory, and storage capacity of the virtual machine. Storage costs are calculated separately based on used pages in the storage account. For details, see Virtual Machines Pricing Details and Azure Storage Pricing.

The following considerations might help you decide on a size:

  • The A8-A11 sizes are also known as compute-intensive instances. The hardware that runs these sizes is designed and optimized for compute-intensive and network-intensive applications, including high-performance computing (HPC) cluster applications, modeling, and simulations. For detailed information and considerations about using these sizes, see About the A8, A9, A10, and A11 compute intensive instances.
  • Dv2-series, D-series, G-series, and the DS/GS counterparts are ideal for applications that demand faster CPUs, better local disk performance, or have higher memory demands. They offer a powerful combination for many enterprise-grade applications.
  • The F-series VMs are an excellent choice for workloads that demand faster CPUs but do not need as much memory or local SSD per CPU core. Workloads such as analytics, gaming servers, web servers, and batch processing will benefit from the value of the F-series.
  • Some of the physical hosts in Azure data centers may not support larger virtual machine sizes, such as A5 – A11. As a result, you may see the error message Failed to configure virtual machine or Failed to create virtual machine when resizing an existing virtual machine to a new size; creating a new virtual machine in a virtual network created before April 16, 2013; or adding a new virtual machine to an existing cloud service. See Error: “Failed to configure virtual machine” on the support forum for workarounds for each deployment scenario.​​
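To see which of these sizes are actually available to you (the offering varies by region), the AzureRM module can list them; the region name below is just an example:

```powershell
# List every VM size offered in a given region (example region)
Get-AzureRmVMSize -Location "westeurope" |
    Sort-Object NumberOfCores |
    Select-Object Name, NumberOfCores, MemoryInMB
```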