It also adds some other capabilities that we will review in this blog.
There can be numerous reasons to start using deployment stacks; below are some of the main reasons:
Deployment stacks can be created and managed via PowerShell and the Azure CLI. There is currently no option to manage them through the portal, but you can view them there; you may have already noticed them at the different scopes.
Within PowerShell, you have different functions to manage deployment stacks:
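For reference, a sketch of the main cmdlets (the exact names may differ slightly between preview versions of the Az.Resources module; each verb also exists for the resource group and management group scopes):

```powershell
# Hedged sketch: discover the deployment stack cmdlets in your Az module.
Get-Command -Module Az.Resources -Noun *DeploymentStack*

# Typical cmdlets at subscription scope:
# New-AzSubscriptionDeploymentStack     - create a stack
# Get-AzSubscriptionDeploymentStack     - list or inspect stacks
# Set-AzSubscriptionDeploymentStack     - update a stack
# Remove-AzSubscriptionDeploymentStack  - delete a stack
```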
When looking into the functions for creating and updating deployment stacks, you may have noticed that you can add a DenySettingsMode with a couple of other settings; these settings are referred to as deny settings. The following options can be configured:
These deny settings provide excellent value for a specific scenario, one that I have been asked about many times.
Imagine a strict environment where resources may only be deployed with a managed identity, updates to those resources are not allowed, and you want to prevent manual changes. The existing RBAC options do not really cover this, as it is hard to handle due to inheritance. Using deployment stacks, you can set the DenySettingsMode to 'DenyWriteAndDelete' and add the managed identity to the DenySettingsExcludedPrincipal option. This ensures that nobody but the managed identity can alter the resources.
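A minimal sketch of that setup (the object ID below is a placeholder for the managed identity; parameter names follow the preview Az module and may change):

```powershell
# Hedged sketch: lock down stack resources so only the excluded
# managed identity can modify or delete them.
New-AzSubscriptionDeploymentStack `
  -Name "stack-locked" `
  -Location "westeurope" `
  -TemplateFile ".\main.bicep" `
  -DenySettingsMode "DenyWriteAndDelete" `
  -DenySettingsExcludedPrincipal "00000000-0000-0000-0000-000000000000"
```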
Deployment stacks require Azure PowerShell version 10.1.0 or later, or Azure CLI version 2.50.0 or later.
To start with deployment stacks, create a Bicep file that deploys your resources. For this article, we will deploy two storage accounts into two separate resource groups.
A deployment stack manages your resources, not your resource groups, so make sure all resource groups exist before deploying your stack.
targetScope = 'subscription'

param location string = 'westeurope'

module str1 'storageaccount.bicep' = {
  name: 'deployment-str1'
  scope: resourceGroup('sponsor-rg-rg1')
  params: {
    name: 'stackstr1'
    location: location
  }
}

module str2 'storageaccount.bicep' = {
  name: 'deployment-str2'
  scope: resourceGroup('sponsor-rg-rg2')
  params: {
    name: 'stackstr2'
    location: location
  }
}
As you can see, this is a straightforward Bicep file that deploys two storage accounts, each in its own resource group. For the storage accounts, we use the simple module displayed below.
At the time of writing this article, you cannot use a function like 'deployment().location' to retrieve the deployment location. The deployment of the stack is also not visible in the Deployments section of the portal.
param location string = resourceGroup().location
param name string

resource str 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: toLower('str${name}')
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

output storageAccountName string = str.name
output storageAccountResourceId string = str.id
To deploy the stack, we will use the PowerShell function with the ‘New’ verb, and as we are deploying to a subscription, we will use the function ‘New-AzSubscriptionDeploymentStack.’
New-AzResourceGroup -Name "sponsor-rg-stacks" -Location "westeurope"
New-AzResourceGroup -Name "sponsor-rg-rg1" -Location "westeurope"
New-AzResourceGroup -Name "sponsor-rg-rg2" -Location "westeurope"
New-AzSubscriptionDeploymentStack `
  -Name "stack-demo-01" `
  -Location "westeurope" `
  -TemplateFile ".\main.bicep" `
  -DeploymentResourceGroupName "sponsor-rg-stacks" `
  -DenySettingsMode "none"
The resource group specified in 'DeploymentResourceGroupName' is where the deployment stack resource is stored. At the time of writing, this looks like a simple pointer for the deployment, because no specific resource shows up in that resource group in the Azure portal.
But if we look under the deployment stacks option, we see our newly created stack.
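You can also inspect the stack from PowerShell; a quick check might look like this (the exact output shape may vary by module version):

```powershell
# List all stacks in the subscription, then look at the resources
# our stack currently manages.
Get-AzSubscriptionDeploymentStack
(Get-AzSubscriptionDeploymentStack -Name "stack-demo-01").Resources
```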
When you want to update your stack by adding resources, those resources must be added to the Bicep file.
targetScope = 'subscription'

param location string = 'westeurope'

module str1 'storageaccount.bicep' = {
  name: 'deployment-str1'
  scope: resourceGroup('sponsor-rg-rg1')
  params: {
    name: 'stackstr1'
    location: location
  }
}

module str2 'storageaccount.bicep' = {
  name: 'deployment-str2'
  scope: resourceGroup('sponsor-rg-rg2')
  params: {
    name: 'stackstr2'
    location: location
  }
}

module str3 'storageaccount.bicep' = {
  name: 'deployment-str3'
  scope: resourceGroup('sponsor-rg-rg2')
  params: {
    name: 'stackstr3'
    location: location
  }
}
When the Bicep file is updated, the deployment stack can be updated as well.
Set-AzSubscriptionDeploymentStack `
  -Name "stack-demo-01" `
  -Location "westeurope" `
  -TemplateFile ".\main.bicep" `
  -DeploymentResourceGroupName "sponsor-rg-stacks" `
  -DenySettingsMode "none"
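For completeness, a stack can be removed again with the 'Remove' verb. In the preview, the managed resources are detached by default rather than deleted, and there are switches to delete them instead; a hedged sketch (check the switches available in your module version):

```powershell
# Remove the stack itself; by default the managed resources are detached.
# Preview versions expose switches such as -DeleteResources / -DeleteAll
# to also delete the managed resources.
Remove-AzSubscriptionDeploymentStack -Name "stack-demo-01"
```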
I hope this article gave you some insights on what deployment stacks can do for you. Please note that this service is still in preview and is subject to change. To learn more, watch my blog and check the articles below.
By signing your commits, you prove that they came from you, as it is straightforward to add anyone as an author using the '--author' flag. By following the steps below, we will make sure our commits are signed. After setting up commit signing for GitHub, we will also look into using different git configuration files for different folders. This helps you use different configurations per folder, such as different user names for your commits or automatic commit signing for GitHub.
To get started, we need to create a GPG key. For this, we can use the command-line tool GnuPG. Install it and open a new terminal.
Within the terminal, use the following command to generate a new key.
gpg --full-generate-key
Choose option 4 (RSA Only) and pick the keysize 4096, then fill in the rest of the questions based on your requirements, and fill in a passphrase at the end.
Now that we have a key on our working station, it needs to be exported and added to GitHub to verify your commits. Go back to your terminal and perform the following command.
gpg --list-secret-keys --keyid-format=long
This command lists your secret keys; we need the part after the '/'. For example, for the key 'rsa3072/FA19266B55C69CB7', we need 'FA19266B55C69CB7'. With that, the key can be exported. We will refer to this part as the secret ID and need it later in this post.
gpg --armor --export FA19266B55C69CB7
Copy the key that is exported, beginning with -----BEGIN PGP PUBLIC KEY BLOCK----- and ending with -----END PGP PUBLIC KEY BLOCK-----.
Open your GitHub Settings page and choose the option 'SSH and GPG keys'.
Next to the GPG keys header, click 'New GPG key' and fill in the correct information. The copied GPG key must be pasted into the 'Key' field. Then click the 'Add GPG key' button to add it.
Now that we have the key generated and added to GitHub, we can make sure that Git is also aware of it and signs our commits. We will configure Git to automatically sign commits with the key we created. As mentioned in the introduction, I am also an Azure DevOps user and work in several other environments with other email addresses; there, we do not always want to sign commits or enter a passphrase. We will therefore configure Git to use a specific configuration per work folder. Let's start by opening the global Git config file, located at 'C:\Users\[username]\.gitconfig'. In this file, we remove any user-specific references and add conditions:
[filter "lfs"]
	process = git-lfs filter-process
	required = true
	clean = git-lfs clean -- %f
	smudge = git-lfs smudge -- %f
[includeIf "gitdir:D:/src/github/"]
	path = C:/Users/MaikvanderGaag/.github/.gitconfig-personal
[includeIf "gitdir:D:/src/3fifty/"]
	path = C:/Users/MaikvanderGaag/.github/.gitconfig-work
[includeIf "gitdir:D:/src/x/"]
	path = C:/Users/MaikvanderGaag/.github/.gitconfig-x
After the changes, the file will look like the one above. As you can see, we include specific Git configuration files based on the top-level folder. This works well for me, as I always create separate folders per organization. A specific config file then looks like this.
.gitconfig-work
[user]
name = Maik van der Gaag
email = mail@corporate.eu
.gitconfig-personal
[user]
name = Maik van der Gaag
email = maik@personal.nl
signingkey = FA19266B55C69CB7
[commit]
gpgSign = true
[tag]
gpgSign = true
In the '.gitconfig-personal' file, you can also see that my user has a signing key that references the secret ID. By setting commit.gpgSign and tag.gpgSign, commits and tags are signed automatically. The handy thing about this configuration is that base settings live in '.gitconfig', and any specific settings go in separate files.
You can run the following commands if you want to configure it in the global ‘.gitconfig’ file.
git config --global user.signingkey FA19266B55C69CB7
git config --global commit.gpgsign true
git config --global tag.gpgsign true
When using multiple folders for your configuration, it is sometimes handy to test if the configurations are set correctly. You can check this with the following command.
git config -l
One of the common issues I ran into was that Git wasn't able to find my keys. To fix this, we add the following section to the configuration.
[gpg]
program = C:\\Program Files (x86)\\GnuPG\\bin\\gpg.exe
This can also be added by a shell command.
git config --global gpg.program 'C:\\Program Files (x86)\\GnuPG\\bin\\gpg.exe'
Creating these API Connections with Infrastructure as Code isn't documented well and is challenging to figure out. It took me some time, but I figured it out by looking at the API requests that the portal makes.
The steps I have taken to figure it out can be applied in different scenarios for Logic Apps but, for example, also on other parts of the portal.
{
  "id": "/subscriptions/f124b668-7e3d-4b53-ba80-09c364def1f3/providers/Microsoft.Web/locations/westeurope/managedApis/servicebus",
  "parameterValueSet": {
    "name": "managedIdentityAuth",
    "values": {
      "namespaceEndpoint": {
        "value": "sb://azsb-temp.servicebus.windows.net"
      }
    }
  },
  "displayName": "servicebus-auth",
  "kind": "V1",
  "location": "westeurope"
}
With the findings, the specific Bicep code can be written. Below are three different API connections that use Managed Identities for the connection.
resource storageaccountApiConnectionAuth 'Microsoft.Web/connections@2016-06-01' = {
  name: 'azuretables-auth'
  location: location
  properties: {
    api: {
      id: 'subscriptions/${subscription().subscriptionId}/providers/Microsoft.Web/locations/${location}/managedApis/azuretables'
    }
    parameterValueSet: {
      name: 'managedIdentityAuth'
      values: {}
    }
    displayName: 'azuretables-auth'
  }
}
resource azla_apiconnection_servicebus_auth 'Microsoft.Web/connections@2016-06-01' = {
  name: 'servicebus-auth'
  location: location
  properties: {
    displayName: 'servicebus-auth'
    api: {
      id: subscriptionResourceId('Microsoft.Web/locations/managedApis', location, 'servicebus')
    }
    parameterValueSet: {
      name: 'managedIdentityAuth'
      values: {
        namespaceEndpoint: {
          value: 'sb://${serviceBus}.servicebus.windows.net/'
        }
      }
    }
  }
}
I hope that this helps you in creating epic Bicep files. If you are looking for more information, be sure to look at the following:
In this blog post, we'll explore exposing an Azure App Registration as an API, including the configuration needed to authenticate towards the application when it has 'User Assignment Required' turned on.
This guide talks about two different Application Registrations.
Ensure the application you are authenticating to (1) has an Application ID URI configured within the App Registration blade of the application.
If this is not configured, make sure to add it.
An 'App Role' needs to be defined to authenticate your application. For this, go to the 'App Role' blade for the App Registration you are authenticating to (1).
If an App Role does not exist, create a new one and fill in the required properties. Make sure to also select 'Applications' in the allowed member types and to enable the role. Adding these roles ensures that they are added to the token of the application.
| Field | Description | Example |
| --- | --- | --- |
| Display Name | Display name for the app role that appears in the admin consent and app assignment experiences. This value may contain spaces. | Survey Writer |
| Allowed member types | Specifies whether this app role can be assigned to users, applications, or both. | Users/Groups |
| Value | Specifies the value of the roles claim that the application should expect in the token. The value should match the string referenced in the application's code and can't contain spaces. | Survey.Create |
| Description | A more detailed description of the app role, displayed during admin app assignment and consent experiences. | Writers can create surveys. |
| Do you want to enable this app role? | Specifies whether the app role is enabled. To delete an app role, deselect this checkbox and apply the change before attempting the delete operation. | Checked |
On the 'API Permission' blade of the application you are authenticating with (2), the required permissions for the application need to be configured. In the blade, click 'Add permission.'
Then go to the tab 'APIs my organization uses' and search for your App Registration. You should see its name in the list.
Click on the application. On the next screen, you should see the roles you can choose from. Select the required permissions and click 'Add permissions.'
These types of app roles require an 'Admin Consent.' After adding the permission, you will be returned to the API permissions blade. In this blade, click on 'Grant admin consent for.'
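If you prefer scripting over clicking through the portal, the same permission and consent can be granted with the Azure CLI; a hedged sketch (all IDs are placeholders):

```shell
# Add the app role (identified by its role GUID) as an application
# permission on the client app registration, then grant admin consent.
az ad app permission add \
  --id <client-app-id> \
  --api <api-app-id> \
  --api-permissions <app-role-id>=Role

az ad app permission admin-consent --id <client-app-id>
```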
This also has an impact on the application landscape of businesses within Azure. As Azure evolves, the applications and services that are used evolve with it. This does not always happen in a controlled manner with time to remove technical debt: as the landscape expands, new services are created and configurations are added.
This surfaces a problem: when a configuration needs to be changed, it must be changed in multiple locations, and you are bound to forget one.
This is where Azure App Configuration comes in. Azure App Configuration is a fully managed service that lets you centralize your application's configuration and feature management. It helps to store and manage configuration data and feature flags in a centralized location, which multiple applications and environments can access.
One of the key benefits of using Azure App Configuration is that it allows you to manage the configuration of your applications in a consistent and organized manner. Instead of hardcoding configuration values into your application's codebase, you can store them in Azure App Configuration and retrieve them at runtime. This makes it easier to manage and update your application's configuration without redeploying your code.
App Configuration is already a comprehensive solution that (at the time of writing this article) has the following capabilities:
App Configuration can be used within many frameworks by using a specific client or provider or by using the Rest API:
When you would like to use it from, for example, PowerShell, you can leverage the REST API.
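A rough PowerShell sketch of reading a single key-value via the REST API with an Azure AD token (the endpoint and key names are placeholders; the `kv` route and `api-version` come from the App Configuration data-plane API):

```powershell
# Hedged sketch: read one key-value from Azure App Configuration via REST,
# authenticating with an Azure AD token for the App Configuration endpoint.
$endpoint = "https://azappconfiguration-demo.azconfig.io"   # placeholder
$key      = "Demo:Config:Value"                             # placeholder

$token   = (Get-AzAccessToken -ResourceUrl $endpoint).Token
$headers = @{ Authorization = "Bearer $token" }

$response = Invoke-RestMethod -Uri "$endpoint/kv/$($key)?api-version=1.0" -Headers $headers
$response.value
```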
As if the feature list above weren't enough, Azure App Configuration also has KeyVault integration. With it, you can add configurations that reference a KeyVault secret. Azure App Configuration will then transparently retrieve the value from the KeyVault for you (using your principal with a valid token), without you noticing anything.
Of course, you can get started by using the Azure Portal, PowerShell, or the CLI, but let's check if we can create the service using Bicep.
The Bicep for setting up Azure App Configuration is very easy. Let's take a look at the example below.
resource configStore 'Microsoft.AppConfiguration/configurationStores@2021-10-01-preview' = {
  name: 'azappconfiguration-${name}'
  location: location
  sku: {
    name: 'standard'
  }
  properties: {
    disableLocalAuth: true
    enablePurgeProtection: true
    softDeleteRetentionInDays: 7
  }
}
The code snippet creates a configuration store with the 'standard' SKU, enables purge protection, and sets the soft delete retention to 7 days. Next to that, it also disables the local authentication, meaning that you cannot authenticate to the configuration store by using a key but are required to use a token to authenticate.
Configurations, secret references, and features can also be added by using Bicep. For this, I created a handy module.
param configStoreName string
param configItems array

resource configStore 'Microsoft.AppConfiguration/configurationStores@2021-10-01-preview' existing = {
  name: configStoreName
}

resource configStoreKeyValue 'Microsoft.AppConfiguration/configurationStores/keyValues@2021-10-01-preview' = [for item in configItems: {
  parent: configStore
  name: (!item.featureFlag) ? item.name : '.appconfig.featureflag~2F${item.name}'
  properties: {
    value: (!item.featureFlag) ? item.value : '{"id": "${item.name}", "description": "", "enabled": false, "conditions": {"client_filters":[]}}'
    tags: item.tags
    contentType: item.contentType
  }
}]
The item will be configured correctly based on the array supplied as a parameter. A sample array for the configuration could look like the snippet below.
[
  {
    name: 'Bicep:Config:Value'
    value: 'Test from Bicep'
    contenttype: ''
    featureFlag: false
    tags: {
      Bicep: 'Deployed'
    }
  }
  {
    name: 'Bicep:Secret:KeyVault'
    value: 'https://azkv-appconfiguration123.vault.azure.net/secrets/bicep-configuration-secret'
    contenttype: 'application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8'
    featureFlag: false
    tags: {
      Bicep: 'Deployed'
    }
  }
  {
    name: 'bicep-featureflag'
    value: ''
    contenttype: 'application/vnd.microsoft.appconfig.ff+json;charset=utf-8'
    featureFlag: true
    tags: {
      Bicep: 'Deployed'
    }
  }
]
Adding Azure App Configuration in code is very easy. This article looks at C# and .NET 6. Make sure you add the following prerequisites as NuGet packages to your project.
The next step is to add the following code to your application startup.
var endpoint = app.Configuration["AppConfig:Endpoint"];
builder.Configuration.AddAzureAppConfiguration(options =>
{
    options.Connect(new Uri(endpoint), new DefaultAzureCredential())
        .ConfigureKeyVault(kv =>
        {
            kv.SetCredential(new DefaultAzureCredential());
        })
        .Select("Demo:*", LabelFilter.Null)
        .ConfigureRefresh(refreshOptions =>
            refreshOptions.Register("Demo:Config:Sentinel", refreshAll: true));
});
First, we retrieve the configuration store's endpoint and then add Azure App Configuration to the application. By using 'DefaultAzureCredential', we make sure that we connect to the configuration store with the managed identity of the service. With 'ConfigureKeyVault', we then set up the connection to the KeyVault to retrieve values, and specify that we also want to use the managed identity for this.
With the 'Select' we start specifying which configurations we want. In this example, we want all configurations that start with 'Demo:' and do not have a label. Using the labels, we could have specified an environment, for example.
With 'ConfigureRefresh', we configure the refresh options to ensure that the application configurations are automatically refreshed when we update the sentinel key "Demo:Config:Sentinel" in the configuration store.
When you would also like to make use of feature management, add the following lines:
options.UseFeatureFlags(featureFlagOptions =>
{
    featureFlagOptions.Select("DemoApp-*", app.Environment.EnvironmentName);
    featureFlagOptions.CacheExpirationInterval = TimeSpan.FromSeconds(30);
});
Using the configuration is now very easy; the snippet below is a function that retrieves a configuration value.
public class DummyFunction
{
    private readonly IConfiguration _configuration;

    public DummyFunction(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    [FunctionName("DummyFunction")]
    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req, ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        string configKey = "DemoFunc:Message";
        string message = _configuration[configKey];
        log.LogInformation($"Found the config in Azure App Configuration {message}");

        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        string responseMessage = string.IsNullOrEmpty(message)
            ? "There is no configuration value with the key 'Demo:FunctionApp:Message' in Azure App Configuration"
            : message;

        return new OkObjectResult(responseMessage);
    }
}
For features, this looks more or less the same. The only difference is that we use a so-called FeatureManager. In this snippet, some lines of code are removed for simplicity.
public class WCController : ControllerBase
{
    private readonly ILogger<WCController> _logger;
    private readonly IFeatureManager _featureManager;

    public WCController(ILogger<WCController> logger, IFeatureManager manager)
    {
        _logger = logger;
        _featureManager = manager;
    }

    [HttpGet(Name = "GetTeams")]
    public IEnumerable<Teams> Get()
    {
        IEnumerable<Teams> retVal = new List<Teams>();

        if (_featureManager.IsEnabledAsync("DemoApi-Points").Result)
        {
        }
        else
        {
        }

        return retVal;
    }
}
When using features, you also have some other nice options, such as:
[FeatureGate("DemoApi-WC")]
<feature name="DemoApp-Beta">
    <p>Beta feature is enabled!</p>
</feature>
In a real-life scenario, configurations are mostly managed and deployed from Azure DevOps. Azure App Configuration can also help in these situations because Azure DevOps has a task that retrieves configuration values and converts them to variables.
Take a look at the pipeline in the below snippet. In the first steps, the configuration is retrieved and later displayed with a PowerShell task. Good to mention here as well is that KeyVault references are also retrieved and specified as secure variables.
trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- task: AzureAppConfiguration@5
  displayName: Get Azure App Configurations
  inputs:
    azureSubscription: 'sub-sub-sub'
    AppConfigurationEndpoint: 'https://azapp-sub.azconfig.io'
    KeyFilter: 'DevOps:*'

- task: PowerShell@2
  displayName: Display values from App Configuration
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Regular value: $(DevOps:DemoValue)"
      Write-Host "Secret value: $(DevOps:Secret:DevOpsSecret)"
In conclusion, Azure App Configuration is a powerful service for managing your applications' configuration and feature management. Centralizing your configuration data and providing feature management capabilities helps you build more flexible and maintainable applications.
If you want to see the code in more detail and look at different examples, go check out my GitHub repo.
As a small present for Christmas, I also wanted to share an option we use very often. That option is using Azure App Configuration from PowerShell.
Function Get-AzAppConfigurationKey {
    Param(
        [parameter(Mandatory = $true)][string]$AppConfiguration,
        [parameter(Mandatory = $true)][string]$Key
    )
To integrate the security checks into your pipeline and, ideally, also into your pull request annotations, some prerequisites are needed that are not in Azure DevOps by default:
These extensions can be installed from the Visual Studio Marketplace:
The Microsoft Security DevOps extension is a wrapper around the Microsoft.Security.DevOps.Cli. This CLI, Microsoft Security DevOps (MSDO), is a command-line application that integrates static analysis tools into the development cycle.
The tool installs and configures static analysis tools and saves the results in a format called SARIF. In the table below, the tools it uses are listed.
First, create a new pipeline in Azure DevOps and make sure that it supports .NET Core 3.1 and .NET 6.0. These are required to run the Security DevOps extension and can be added with the tasks below.
- task: UseDotNet@2
  displayName: 'Use dotnet 3.1'
  inputs:
    version: 3.1.x

- task: UseDotNet@2
  displayName: 'Use dotnet 6.0'
  inputs:
    version: 6.0.x
These tasks must run before the extension itself to ensure that all components on the build agent are configured and the Security DevOps scan can run successfully.
- task: MicrosoftSecurityDevOps@1
  displayName: 'Run Microsoft Security DevOps'
The above task executes the scanner and saves the results by default in the "CodeAnalysisLogs" artifact. To display the scan results, this artifact needs to be published; when it is, the results appear in the pipeline's "Scans" tab. To publish the results, add the Publish task below.
- task: PublishBuildArtifacts@1
  condition: always()
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'CodeAnalysisLogs'
    publishLocation: 'Container'
Putting all of this together ensures that the results are published and viewable in the "Scans" tab when you run the pipeline. The example below shows the result of a scan on one of my repos that contains some bicep files.
The complete pipeline YAML file looks like the one below. Some parts are left out for simplicity.
trigger:
- main

pool:
  vmImage: windows-latest

variables:
- name: system.debug
  value: true

steps:
- task: UseDotNet@2
  displayName: 'Use dotnet 3.1'
  inputs:
    version: 3.1.x

- task: UseDotNet@2
  displayName: 'Use dotnet 6.0'
  inputs:
    version: 6.0.x

- task: MicrosoftSecurityDevOps@1
  displayName: 'Run Microsoft Security DevOps'

- task: PublishBuildArtifacts@1
  condition: always()
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'CodeAnalysisLogs'
    publishLocation: 'Container'
If you want to learn more about Microsoft Defender for Cloud or especially Defender for DevOps, check out the following resources:
A connection to the source management system is required to get these insights. With this connection, you allow Defender for Cloud to discover the resources in, for example, your Azure DevOps organization or your GitHub Repositories.
In the following steps, we will go over the procedure to connect Defender for DevOps to your Azure DevOps organization:
After a while, findings will start to pop up in Defender for DevOps. With these, you will be able to mitigate vulnerabilities in Azure DevOps.
Connecting GitHub is almost the same as connecting Azure DevOps except for the connection and authorization. Let's discover this by following the below steps:
Just as with the DevOps connection for Azure DevOps, it will take a while before the information is displayed in the Defender for DevOps tab.
If you are interested, you can check the resource group for the created resources/connections. Check the "Show hidden types" box to make the connections visible.
In a larger environment, we would solve this by creating multiple subscriptions, but as we do not have multiple sponsorship subscriptions, we came up with another idea.
We categorize resource groups by using tags, and came up with the idea of setting up access rights on newly created resource groups based on the tags supplied. To get this operational, a colleague and I thought out a new custom policy, which he created and which I am sharing with the community.
For this policy, we use the policy effect 'deployIfNotExists'. This effect lets us execute a deployment when a new resource group is created, using the policy rule below.
"if": {
  "allOf": [
    {
      "field": "type",
      "equals": "Microsoft.Resources/subscriptions/resourceGroups"
    },
    {
      "field": "[concat('tags[', parameters('tagName'), ']')]",
      "equals": "[parameters('tagValue')]"
    }
  ]
}
This rule checks whether the resource type is a resource group and whether it contains a tag with a specific value. The tag and the value to check for are specified when assigning the policy to a scope.
The 'then' part of the rule then executes a deployment, which is just a regular RBAC deployment via ARM.
"then": {
  "effect": "deployIfNotExists",
  "details": {
    "EvaluationDelay": "AfterProvisioningSuccess",
    "roleDefinitionIds": [
      "/providers/microsoft.authorization/roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635"
    ],
    "type": "Microsoft.Authorization/roleAssignments",
    "existenceCondition": {
      "allOf": [
        {
          "field": "Microsoft.Authorization/roleAssignments/roleDefinitionId",
          "equals": "[concat('/subscriptions/', subscription().subscriptionId, parameters('roleId'))]"
        },
        {
          "field": "Microsoft.Authorization/roleAssignments/principalId",
          "equals": "[parameters('principalId')]"
        }
      ]
    },
    "deployment": {
      "properties": {
        "mode": "incremental",
        "parameters": {
          "principalType": { "value": "[parameters('principalType')]" },
          "principalId": { "value": "[parameters('principalId')]" },
          "roleId": { "value": "[parameters('roleId')]" }
        },
        "template": {
          "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "parameters": {
            "principalType": { "type": "string" },
            "principalId": { "type": "string" },
            "roleId": { "type": "string" }
          },
          "variables": {},
          "resources": [
            {
              "name": "[guid(resourceGroup().id, deployment().name)]",
              "type": "Microsoft.Authorization/roleAssignments",
              "apiVersion": "2020-10-01-preview",
              "properties": {
                "principalId": "[parameters('principalId')]",
                "roleDefinitionId": "[parameters('roleId')]",
                "principalType": "[parameters('principalType')]"
              }
            }
          ],
          "outputs": {
            "policy": {
              "type": "string",
              "value": "[concat('Added RBAC Rights')]"
            }
          }
        }
      }
    }
  }
}
On the 'deployIfNotExists', an 'EvaluationDelay' is also specified. This determines when the existence of the related resources should be evaluated. The delay is only used for evaluations triggered by a create or update resource request, so the evaluation is done after provisioning has succeeded.
The complete policy definitions then look like this.
{
  "properties": {
    "displayName": "Add access rights based on tags",
    "policyType": "Custom",
    "mode": "All",
    "description": "Policy to add access rights based on tags added to a resource group",
    "metadata": {
      "version": "1.0.0",
      "category": "Custom"
    },
    "parameters": {
      "tagName": {
        "type": "String",
        "metadata": {
          "displayName": "Tag Name",
          "description": "The Tag name to audit against (i.e. Environment CostCenter etc.)"
        },
        "defaultValue": "Environment"
      },
      "tagValue": {
        "type": "String",
        "metadata": {
          "displayName": "Tag Value",
          "description": "Value of the tag to audit against (i.e. Prod/UAT/TEST 12345 etc.)"
        }
      },
      "roleId": {
        "type": "string",
        "metadata": {
          "displayName": "roleId",
          "description": "roleId",
          "strongType": "Microsoft.Authorization/roleDefinitions"
        }
      },
      "principalId": {
        "type": "string",
        "metadata": {
          "displayName": "principalId",
          "description": "principalId"
        }
      },
      "principalType": {
        "type": "string",
        "metadata": {
          "displayName": "principalType",
          "description": "principalType"
        },
        "allowedValues": [
          "Device",
          "ForeignGroup",
          "Group",
          "ServicePrincipal",
          "User"
        ]
      }
    },
    "policyRule": {
      "if": {
        "allOf": [
          {
            "field": "type",
            "equals": "Microsoft.Resources/subscriptions/resourceGroups"
          },
          {
            "field": "[concat('tags[', parameters('tagName'), ']')]",
            "equals": "[parameters('tagValue')]"
          }
        ]
      },
      "then": {
        "effect": "deployIfNotExists",
        "details": {
          "EvaluationDelay": "AfterProvisioningSuccess",
          "roleDefinitionIds": [
            "/providers/microsoft.authorization/roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635"
          ],
          "type": "Microsoft.Authorization/roleAssignments",
          "existenceCondition": {
            "allOf": [
              {
                "field": "Microsoft.Authorization/roleAssignments/roleDefinitionId",
                "equals": "[concat('/subscriptions/', subscription().subscriptionId, parameters('roleId'))]"
              },
              {
                "field": "Microsoft.Authorization/roleAssignments/principalId",
                "equals": "[parameters('principalId')]"
              }
            ]
          },
          "deployment": {
            "properties": {
              "mode": "incremental",
              "parameters": {
                "principalType": { "value": "[parameters('principalType')]" },
                "principalId": { "value": "[parameters('principalId')]" },
                "roleId": { "value": "[parameters('roleId')]" }
              },
              "template": {
                "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
                "contentVersion": "1.0.0.0",
                "parameters": {
                  "principalType": { "type": "string" },
                  "principalId": { "type": "string" },
                  "roleId": { "type": "string" }
                },
                "variables": {},
                "resources": [
                  {
                    "name": "[guid(resourceGroup().id, deployment().name)]",
                    "type": "Microsoft.Authorization/roleAssignments",
                    "apiVersion": "2020-10-01-preview",
                    "properties": {
                      "principalId": "[parameters('principalId')]",
                      "roleDefinitionId": "[parameters('roleId')]",
                      "principalType": "[parameters('principalType')]"
                    }
                  }
                ],
                "outputs": {
                  "policy": {
                    "type": "string",
                    "value": "[concat('Added RBAC Rights')]"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
Check out GitHub for the script files; the policy is also supplied in Bicep:
https://github.com/maikvandergaag/msft-azureplatform/tree/main/policies/add_access
In part 2, we deployed the definitions via Azure DevOps pipelines. For GitHub Actions, we will also use the script file, but GitHub Actions offers another option for managing policy definitions as well.
For this test, we will reuse the service principal that we created in part 2 of the series. Its credentials will be saved in a GitHub secret.
Authentication information is stored in so-called secrets: encrypted values within GitHub that can be saved at the organization, repository, or repository environment level. The credential information for authenticating against Azure is saved in JSON format.
```json
{
  "clientId": "[clientId]",
  "clientSecret": "[clientSecret]",
  "subscriptionId": "[subscription id]",
  "tenantId": "[Azure Active Directory Tenant Id]"
}
```
To save the credential information, you can follow the below steps:
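As a sketch, the secret can also be created from the command line with the GitHub CLI instead of via the portal. The secret name AZURE_CREDENTIALS, the placeholder ids, and the repository name are assumptions, not values from this article:

```shell
# Write the credential JSON to a local file first (placeholder values --
# replace them with the output of your service principal creation).
cat > creds.json <<'EOF'
{
  "clientId": "00000000-0000-0000-0000-000000000000",
  "clientSecret": "<client secret>",
  "subscriptionId": "00000000-0000-0000-0000-000000000000",
  "tenantId": "00000000-0000-0000-0000-000000000000"
}
EOF

# Upload it as a repository secret (requires an authenticated GitHub CLI):
# gh secret set AZURE_CREDENTIALS --repo <owner>/<repo> < creds.json

# Afterwards, remove the local copy so the secret does not linger on disk:
# rm creds.json
```

The JSON keys must match the format shown above, because the azure/login action parses them from the secret value.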
With the credentials saved, we can get started with the workflow. Create a new workflow in the GitHub UI, give it a name and a file name, and remove the boilerplate that is not required. To work with Azure, we will use the Azure actions, starting with the login step.
Add the 'azure/login' step and connect it to the correct secret. The YAML snippet is below.
```yaml
- uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}
    enable-AzPSSession: true
```
"Make sure you add 'enable-AzPSSession: true' if you want to make use of Azure PowerShell in the workflow."
In this task, you see the reference to the secret we saved in the previous paragraph.
If you want to start from GitHub and deploy definitions that haven't been deployed to Azure yet, you can reuse the script from part 2 and execute it within GitHub Actions via the Azure PowerShell action.
```yaml
- name: Run Azure PowerShell script
  uses: azure/powershell@v1
  with:
    inlineScript: |
      ./scripts/azpolicy.ps1 -Scope "${{ env.Scope }}" -ScopeName "${{ env.ScopeName }}" -PolicyFolder "${{ env.Folder }}"
    azPSVersion: "latest"
```
We reference the same script file and supply it with the correct arguments in the task. For your reference, the complete GitHub Actions file looks like this:
```yaml
name: Policy - All Policies

on:
  workflow_dispatch:
    inputs:
      remarks:
        description: 'Reason for triggering the workflow run'
        required: false
        default: 'Updating Azure Policies'

env:
  Folder: './deploy'
  Scope: 'ManagementGroup'
  ScopeName: '324f7296-1869-4489-b11e-912351f38ead'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
          enable-AzPSSession: true
      - name: Run Azure PowerShell script
        uses: azure/powershell@v1
        with:
          inlineScript: |
            ./scripts/azpolicy.ps1 -Scope "${{ env.Scope }}" -ScopeName "${{ env.ScopeName }}" -PolicyFolder "${{ env.Folder }}"
          azPSVersion: "latest"
```
GitHub Actions also provides an action (Manage Azure Policy) to manage policy definitions, deploy them to the correct scopes, and manage the assignments. The downside of this action is that it requires the definitions to already exist in Azure, because it references their ids.
To use this, it is best to export the definitions from Azure and work with the folder hierarchy below.
The names of the folders refer to the policy names, and the policy.json files contain the policy definitions. These files are the same as shown in part 1, except that they also include the id, type, and name properties. The action only needs a reference to the policy folder, and you are good to go managing your policy definitions.
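As an illustration, the hierarchy could be scaffolded like this. The folder name audit-tag-environment, the policy content, and the placeholder ids are hypothetical, not taken from this article:

```shell
# Hypothetical layout for the Manage Azure Policy action: one folder per
# policy, each containing a policy.json exported from Azure (note the extra
# id, name, and type properties compared to part 1).
mkdir -p policies/audit-tag-environment

cat > policies/audit-tag-environment/policy.json <<'EOF'
{
  "id": "/providers/Microsoft.Management/managementGroups/<mgId>/providers/Microsoft.Authorization/policyDefinitions/<policyName>",
  "name": "<policyName>",
  "type": "Microsoft.Authorization/policyDefinitions",
  "properties": {
    "displayName": "Audit Environment tag",
    "policyType": "Custom",
    "mode": "All",
    "policyRule": {
      "if": { "field": "tags['Environment']", "exists": "false" },
      "then": { "effect": "audit" }
    }
  }
}
EOF

# Resulting structure:
#   policies/
#     audit-tag-environment/
#       policy.json
```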
```yaml
name: Policy - All Policies

on:
  workflow_dispatch:
    inputs:
      remarks:
        description: 'Reason for triggering the workflow run'
        required: false
        default: 'Updating Azure Policies'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Create or Update Azure Policies
        uses: azure/manage-azure-policy@v0
        with:
          paths: |
            policies/**
```
You may have noticed that this pipeline does not reference the management group. This is because each JSON file contains the id of the existing policy definition, so the action knows where it is deployed.
With this action, you can also manage your policy assignments by adding them to the same folder structure; if you are interested, take a look at the action's page on the GitHub Marketplace.
Of course, this is not a production-grade solution, but it gives you the highlights on how to manage your policy definitions in code and how to deploy them.
The script shared in part 1 has been adjusted to read all policy definition files within a directory, because you do not want a separate pipeline per definition. The updated PowerShell script is below.
```powershell
[CmdletBinding()]
Param(
    [Parameter(Mandatory = $true)]
    [ValidateSet('Subscription', 'ManagementGroup')]
    [string]$Scope,
    [Parameter(Mandatory = $true)]
    [string]$ScopeName,
    [Parameter(Mandatory = $true)]
    [string]$PolicyFolder,
    [Parameter(Mandatory = $false)]
    [string]$RoleIds
)

$policyFiles = Get-ChildItem -Path $PolicyFolder -Recurse -Filter "*.json"

foreach ($policyFile in $policyFiles) {
    Write-Output "Working on Policy: $($policyFile.Name)"

    $policyDefinitionFileContent = Get-Content -Raw -Path $policyFile.FullName
    $policyDefinitionFile = ConvertFrom-Json $policyDefinitionFileContent
    $policyDefinitionName = $policyDefinitionFile.properties.displayName

    $parameters = @{}
    $parameters.Add("Name", $policyDefinitionName)

    switch ($Scope) {
        "ManagementGroup" {
            $parameters.Add("ManagementGroupName", $ScopeName)
        }
        "Subscription" {
            $sub = Get-AzSubscription -SubscriptionName $ScopeName
            $parameters.Add("SubscriptionId", $sub.Id)
        }
    }

    $definition = Get-AzPolicyDefinition @parameters -ErrorAction SilentlyContinue
    $parameters.Add("Policy", $policyDefinitionFileContent)

    if ($definition) {
        Write-Output "Policy definition already exists, policy will be updated"
    } else {
        Write-Output "Policy does not exist"
    }

    New-AzPolicyDefinition @parameters
}
```
Policy definitions can be saved in Azure DevOps as code. The definitions in source control can be deployed via Azure DevOps Pipelines.
To be able to start deploying the definitions, we need to have the following in place:
We need a service principal to deploy policy definitions via Azure DevOps; it can be created via the portal. Once the principal is created, give it the correct permissions on the scope where you want to deploy the definitions. For this article, I will give the principal access to the management group where the policies need to be deployed.
Of course, we could give the principal Owner permissions on the scope, but we want to stick to least privilege. Therefore, we give the principal the 'Resource Policy Contributor' role, which is sufficient for deploying Azure Policies.
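If you prefer the Azure CLI over the portal, the two steps could be sketched like this. The principal name, the placeholder management group id, and the appId are assumptions; this generates a helper script rather than running the commands directly, so you can review them first:

```shell
# Sketch: generate a small helper script with the Azure CLI commands to
# create the service principal and grant it 'Resource Policy Contributor'
# on the management group. Names and ids are placeholders.
cat > create-policy-sp.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

MG_ID="<management-group-id>"     # placeholder: your management group id
SP_NAME="sp-policy-deployment"    # hypothetical principal name

# Create the service principal (no role assignment by default).
az ad sp create-for-rbac --name "$SP_NAME"

# Grant least-privilege rights on the management group scope.
az role assignment create \
  --assignee "<appId-from-previous-step>" \
  --role "Resource Policy Contributor" \
  --scope "/providers/Microsoft.Management/managementGroups/$MG_ID"
EOF
chmod +x create-policy-sp.sh
```

Fill in the placeholders, then run `./create-policy-sp.sh` with an authenticated Azure CLI session.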
With the rights in place, the service connection in Azure DevOps can be configured. In Azure DevOps, create an "Azure Resource Manager" service connection and fill in the correct information regarding your platform.
Next up is the pipeline. For the pipeline, we will start with an empty one that we save in a GitHub repository, where we also save the policy definition files. From the empty pipeline, remove the default tasks and add an Azure PowerShell task that connects to the Service Connection we created.
Point this task to the correct script file in the repository and ensure the arguments are supplied using variables. The YAML of the task will look like the snippet below.
```yaml
- task: AzurePowerShell@5
  inputs:
    azureSubscription: 'Root Management Group Connection'
    ScriptType: 'FilePath'
    ScriptPath: './scripts/azpolicy.ps1'
    ScriptArguments: '-Scope "$(scope)" -ScopeName "$(scopeName)" -PolicyFolder "$(folder)"'
    azurePowerShellVersion: 'LatestVersion'
    pwsh: true
```
Bringing this all together will result in a simple pipeline like below.
```yaml
trigger:
  - main

pool:
  vmImage: ubuntu-latest

variables:
  - name: scope
    value: "ManagementGroup"
  - name: scopeName
    value: "mgName"
  - name: folder
    value: "./policies/deploy"

steps:
  - task: AzurePowerShell@5
    displayName: 'Deploy Azure Policy Definitions'
    inputs:
      azureSubscription: 'Root Management Group Connection'
      ScriptType: 'FilePath'
      ScriptPath: './scripts/azpolicy.ps1'
      ScriptArguments: '-Scope "$(scope)" -ScopeName "$(scopeName)" -PolicyFolder "$(folder)"'
      azurePowerShellVersion: 'LatestVersion'
      pwsh: true
```
Of course, this is not a production-grade solution, but it highlights how to manage your policy definitions in code and how to deploy them. In the following article, we will deploy definitions via GitHub Actions.