GitHub Actions Code Signing with Azure Key Vault HSM, RBAC, OIDC and Managed Identity

Today, I'd like to take a break from our usual supply chain attack topics and talk about something that is covered on the internet, but only if you look long enough, combine multiple sources and engage in some educated guesswork and experimentation.

NOTE:

  • Prices mentioned here are just examples; they may vary by region or over time. Use them only as a general "a lot of money"/"not a lot of money" indication rather than as exact numbers for your calculations.
  • My approach to configuring Azure is web-based: only clicking, no scripting or templates (though rough CLI equivalents are sketched in a few places for those who prefer them).
  • If you can't find something in the UI, there's a chance that the UI changed after this text was written. Sorry for that, but resisting progress is futile :).
  • The following text is the full story of what I did and why. If you're only after the short "click this" version, look for the bullet-point lists plus maybe one sentence before and after them. That should be enough.

Why would I want that

Since June 1, 2023, code signing certificates have to be generated and stored on Hardware Security Modules (HSMs). Before that date there was a big difference in price between OV and EV certificates. OV (Organization Validated) required a simplified verification of who you are and could be delivered as a simple certificate file: nothing fancy and quite cheap (you could find something for around $90/year). EV (Extended Validation) required more paperwork and extended vetting, and had to be secured by an HSM. As a bonus, EV certificates also prevent MS SmartScreen alerts about an "unrecognized application", because being signed with an EV certificate grants implicit trust, while an OV-signed application needs to build up reputation over time before the alert goes away. Because of that, EV certificates were always more expensive (around $400/year).

Right now, every type of code signing certificate has to be HSM-backed, and not by just "any HSM", but by one that is at least FIPS 140-2 Level 2 or Common Criteria EAL 4+ compliant. So, what happened? Prices of EV certificates stayed the same, but the cheap OV? Those are now around $300/year, and that's only if you have your own HSM; if you don't, you need to pay another $100 for a USB HSM dongle that will be delivered to you. The good thing about this situation is that the previous "we'll just send you a file" approach was not exactly secure, because such a certificate was easier to steal and abuse. Using an HSM for generation and storage is much better from this point of view (and it's the main reason behind the industry-wide move).

With the new rules a "small" problem arises: we get a USB dongle that we need to plug into some sort of computer that will sign our software, which might be messy if builds are done in the cloud, like in GitHub Actions. Yes, you can plug it into some computer at the office or at home and run a GitHub Runner there that will get the job done. But then you rely on the stability of such a solution and add more maintenance tasks for yourself. You can also buy your own HSM, but they're expensive and, again, you need to maintain them. The last option is to use a cloud HSM, and that is the approach I want to discuss here.

There's also one more thing to consider before we begin: if the price difference between OV and EV is not that big anymore, maybe it's worth going for EV and making SmartScreen happy? In our case we went with the EV option, but it's not mandatory, and it only slightly affects the process of "getting the certificate", not "how you use it later", so this text is still relevant either way.

What I'm about to describe uses Azure Key Vault as the cloud HSM and GlobalSign as the Certificate Authority (CA). Why? We already use Azure through Microsoft for Startups Founders Hub. If you're in a different situation, Google Cloud offers the same functionality, and I'd be surprised if Amazon didn't have something similar. As for GlobalSign: they officially support cloud HSMs and were fast to reply to all my questions. Another company I'm aware of that also has official support is DigiCert. It's possible that whoever you use right now will also work, but you'll have to ask them first.

Key Vault — setting up a place for certificate

Creating a new Key Vault is quite simple and, in our case, requires only a few specific settings (if you prefer scripting, a rough az CLI equivalent is sketched right after this list):

  • First screen ("Basics"):
    • "Pricing tier" must be set to "premium", because only this allows using HSM as a storage and it allows longer RSA keys.
    • Setting "Enable purge detection" and some non-zero "days to retain deleted vaults" is advisable. It protects the certificate from being completely deleted. Its private part lives only inside HSM, so if that's gone, you can't regenerate, or import it from backup. Those two options will allow you to recover it from deleted state if some accident happens.
  • Second screen ("Access configuration"):
    • "Permission model" should be set to "Azure role-based access control", which right now seems to be the default and recommended option.
    • The "resource access" we left blank because we want to use GitHub, which also lives in Azure, but is not treated as Azure service.
  • Third screen ("Networking"):
    • The "enable public access" must be enabled. It sounds ominous, but what it really means is "which IPs can reach the Key Vault REST API over HTTP" and "reach" does not mean "use it", for that — access rights are needed. This part took me a bit to understand, because after all we only want GitHub to talk to it. GitHub's documentation states that they do have a pool of IPs, but it can change at any time, making it challenging to provide explicit access. If you click on "selected networks" you will see option to "allow trusted Microsoft services to bypass firewall", but again GitHub runs in Azure, but is not Azure resource and according to documentation it is not one of "trusted services".
      Fun fact: I read somewhere that in the case of Azure managed databases, allowing Azure services to talk to them also allows GitHub.
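
If you prefer scripting over clicking, a rough az CLI equivalent of the above might look like this. I set everything up through the portal, so treat it as a sketch: the vault name, resource group, region and retention period are placeholders, and the flags mirror the settings described in the list (Premium tier, RBAC, purge protection).

az keyvault create \
  --name <vault-name> \
  --resource-group <resource-group> \
  --location <region> \
  --sku premium \
  --enable-rbac-authorization true \
  --enable-purge-protection true \
  --retention-days 90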

Now you have a vault capable of storing certificates inside a shared HSM, one that will be accessible from GitHub and won't kill you on price. This option currently costs around $5/certificate/month, plus $0.15 per 10,000 transactions (most requests made to the vault REST API count as transactions).

There is a different offering called "Dedicated HSM", which provides dedicated hardware for your exclusive use, but it's way more expensive and not needed here.

It's good to enable logging for the vault too, so you can see who is using it. That part is well documented and easy to find. In general, you:

  • Select "Diagnostic settings" in vault.
  • There you tell both "audit" and "allLgos" groups to, for example, "send to Log Analytics workspace" (you might need to create that workspace).

Then you can look at the logs through the Log Analytics workspace, or by clicking "Logs" in the vault menu itself (it's just below "Diagnostic settings"). On top of that, you can add some Alerts to know when the vault is down or is being used more than usual (the "Custom log search" signal might be handy). The price of logging and alerting is quite low: a few $/GB of logs per month and something similar per alert (depending on the alert type).
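
For reference, the same diagnostic settings can probably be wired up from the CLI along these lines. I configured them in the portal, so this is a sketch: the setting name and workspace ID are placeholders, and older CLI versions may want explicit categories (such as "AuditEvent") instead of category groups.

az monitor diagnostic-settings create \
  --name vault-logs \
  --resource $(az keyvault show --name <vault-name> --query id -o tsv) \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"categoryGroup":"audit","enabled":true},{"categoryGroup":"allLogs","enabled":true}]'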

Getting the certificate

Depending on the type (OV or EV) you'll need a different amount of time, paperwork and phone calls, with EV being the more demanding one. The paperwork involves printing, signing, scanning and sending documents back, so have a printer and a scanner/phone handy.

The whole procedure is: pay first, then we'll verify you, then we'll grant the certificate. Keep reading to see what you should ask your CA before initiating the procedure; maybe you need to switch to a more compatible CA first.

If you're a young company: EV certificates come with a requirement that the company must have existed for at least 3 years. In the case of GlobalSign it's not a hard requirement; we just had to provide additional papers to prove that we are who we are. In the case of other CAs, ask them.

Another tricky part is the "certification of used HSM", where you confirm that the HSM is compliant with FIPS 140-2 Level 2 or Common Criteria EAL 4+. In the case of GlobalSign, there's a form where you have to state which manufacturer and model of HSM you're using. Azure does mention some models that are in use, but at the same time there's no way to explicitly pick them while creating a vault, and Microsoft might add more models in the future without notice. The solution here (which I got from GlobalSign support) is to put manufacturer: Microsoft, model: Azure Key Vault. I'm not sure about DigiCert, but both are trusted Azure partners, so the procedure might be similar. In the case of other CAs, you have to ask.

When you start your certificate order with GlobalSign, you must pick from a list which type of certificate you want. The "Extended Validation (EV) Code Signing (HSM)" is the one I used. The "(HSM)" part was a bit confusing: does it mean I want to use my own HSM, or does it mean I want them to send me the additional USB dongle? Now I know: the "(HSM)" suffix means that I want to use an HSM I already have.

One more thing to ask your CA about is whether they require HSM attestation. It's a piece of data that proves the HSM is compliant with the norms, that the certificate's key was generated inside it and that all the proper protections are in place. In the case of GlobalSign that part was not needed, possibly because of their cooperation with Azure. But if it were required, Key Vault currently has no way of providing that data. Google Cloud, on the other hand, allows downloading an "attestation bundle".

The whole company vetting dance took me a bit over a week from start to certificate in the vault, but there was another catch: the CA vetting team wants to call you at some point, and they need your phone number to be verifiable through a "trusted third party", like an online phone book (Yellow Pages and such). If it's not available, they offer other, much slower, means of validation. How do I know? Because there was a bit of a misunderstanding with my phone number, which in the end got resolved and the phone call validation was completed, but it apparently also triggered a fallback option of sending a letter with a secret password that has to be emailed back. The letter was sent on August 14th and got here on August 30th, so if you opt for this way of validation, you need much more time.

How to get the certificate into the vault

Assuming you passed the vetting, you need to generate the certificate request in the vault, submit it to the CA to get it signed and "put it back" into the vault so it can be used.

Before you start

Being an Azure admin or owner apparently doesn't give you the rights to fully manage Key Vault contents, which is somewhat counterintuitive to me, but that's how it works. I'm writing this text after completing the whole procedure, so I'm not 100% sure, but somewhere around this moment I got to the point where I had the vault but could not perform any operations on it, like adding a new certificate, because I had "no rights". To be able to configure the vault, you need to grant yourself rights, and in my case the "Key Vault Administrator" role made the most sense. To do this (a rough CLI equivalent is sketched after the list):

  • Go to your new vault.
  • Open "Access control (IAM)".
  • "Add" — "Add role assignment".
  • Select "Key Vault Administrator" role.
  • On "Members" screen use the "User, group, or service principal" and click "select members".
  • Select yourself on the list.
  • And "Assign" on the last screen.

The right way, which we found too late

I can't say much about this approach, as we first applied for the certificate and only then started to set up Key Vault, when the Certificate Signing Request (CSR) was needed. If you search the internet for "integrating azure key vault with certificate authorities" you'll find the official Microsoft documentation. It boils down to adding your CA credentials to the Key Vault configuration (currently only GlobalSign and DigiCert are supported). Then, when you go to "Certificates" in Key Vault and click "Generate/Import", you can select "Certificate issued by an integrated CA" as the "Type of Certificate Authority". From there you select what you want and Key Vault should take care of ordering and managing the certificate for you.

The old school way that we used

Right now we're at the point where the CA wants a CSR generated by our Key Vault HSM. It felt like using the "integrated CA" option at this stage would create a new order for yet another certificate, and that's not what I wanted. It's possible that I'm wrong, but I went with the safe path of "certificate issued by a non-integrated CA", where everything has to be done manually. This way also works with CAs that allow using cloud HSMs but are not official partners. What you need to do (a rough CLI sketch follows the list):

  • In your vault select "certificates" and click "generate/import".
  • Settings on the main screen:
    • "Type of Certificate Authority (CA)" — "Certificate issued by a non-integrated CA".
    • "Subject" — "CN=<full company name you registered while requesting the certificate>".
    • "Validity Period" — number of months you bought the certificate for.
    • "Content Type" — PKCS #12 (at least for GlobalSign that worked fine).
    • "Lifetime action" — whatever you want, I have set the notification, but CA will also remind us on its own.
  • Now click "Not configured" link in the "Advanced Policy Configuration" and there:
    • "Extended Key Usages" are by default set to something that SSL certificate would use, I replaced them with single "1.3.6.1.5.5.7.3.3", which is the OID for "used for code signing". OIDs (Object IDentifiers) are magical numbers, they have specific meaning and there's a ton of them. If you look for "certificate OID", you'll find the full registry. In general, they specify attributes of the certificate.
    • The "Key Usage Flags" I changed to "Digital Signature" only.
    • "Reuse key on renewal" — No.
    • "Exportable private key" — No (HSM keys can't be exported and only that option will enable correct key types).
    • "Key Type" — RSA-HSM (that's only visible if you set previous option to "No").
    • "Key Size" — depends on CA, but currently minimum is 3072 and 4096 can't hurt, so just go for 4096, if your CA allows it.
    • "Enable Certificate Transparency" — No (transparency works for SSL certificates, so probably in this case it has completely no effect, no matter what you use).
    • "Certificate Type" — just leave empty.
  • Now you can create the certificate.
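
For completeness, here's roughly how the same certificate request could be created from the CLI. I clicked through the portal, so treat it as a sketch: the policy below mirrors the settings above, "Unknown" is how a non-integrated CA is expressed, and the subject, validity, vault and certificate names are placeholders.

cat > policy.json <<'EOF'
{
  "issuerParameters": { "name": "Unknown" },
  "keyProperties": {
    "exportable": false,
    "keyType": "RSA-HSM",
    "keySize": 4096,
    "reuseKey": false
  },
  "secretProperties": { "contentType": "application/x-pkcs12" },
  "x509CertificateProperties": {
    "subject": "CN=<full company name>",
    "ekus": [ "1.3.6.1.5.5.7.3.3" ],
    "keyUsage": [ "digitalSignature" ],
    "validityInMonths": 12
  }
}
EOF

az keyvault certificate create --vault-name <vault-name> --name <certificate-name> --policy @policy.json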

As a matter of fact, when the CA signs your request, they replace some fields according to what you paid for, so even leaving the default (Extended) Key Usage in the previous points should not change much. Otherwise you could buy some cheap SSL certificate and then send a request saying "I want a certificate that can sign emails, encrypt web traffic, sign binaries and also sign other certificates, because I want to be a CA myself". The same happens with the "Subject" field. It's mandatory, so you must provide something, but the CA might replace it anyway. In the case of GlobalSign I provided only the Common Name (CN) and they added location (L), email (E), organization (O), etc.

Now, all that's left is to:

  • Go to the new certificate and click on "Download CSR".
  • Go to your CA's web page, submit it there and get back the signed certificate. To do this, you must first be informed by the CA that the certificate is ready (so after all the vetting concludes). In the case of GlobalSign they provide a magic URL where you first enter the "pickup password" you created during certificate ordering, then upload the CSR and download the signed version.

Once you get back the signed certificate (a CLI sketch of both steps follows below):

  • Go to the certificate in your vault; right next to "Download CSR" is "Merge Signed Request", which you use to upload the response from the CA.
  • From now on you have a valid certificate in the vault.
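
The CSR download and the merge can also be done from the CLI, roughly like this (a sketch; vault, certificate and file names are placeholders, and the CSR comes back as base64 without the PEM header/footer, so you may need to wrap it yourself before handing it to the CA):

az keyvault certificate pending show --vault-name <vault-name> --name <certificate-name> --query csr -o tsv

az keyvault certificate pending merge --vault-name <vault-name> --name <certificate-name> --file <signed-response-from-ca>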

Using the certificate

An HSM-based certificate means that you can't just store its private part somewhere in GitHub; the signing process must happen inside the HSM. The HSM gets the hash of the binary, encrypts it with the certificate's private key and returns the result, which is embedded in the file.

In our case we want to build things with GitHub Actions, so we need to grant them access to the vault. For that purpose we'll use an Azure Managed Identity with federated credentials. Then we need a tool that can perform signing with a certificate that lives in Azure. The default Microsoft SignTool can't do this, but its open-source replacement AzureSignTool can (with a little help).

Azure Managed Identity

Looking through the internet, all the examples talk about adding a new Application to Azure and then granting it access rights through a Service Principal. They also assume that the Key Vault is not using Role Based Access Control (RBAC), but the older Access Policies. That still works (assuming you don't select RBAC while creating the new vault) and you're free to use it, but that's not what I was planning to do.

I wanted to use a "user-assigned managed identity", because they are easier to secure (no secrets to copy-paste) and they can be attached to multiple resources if needed. Technically they're a special case of a Service Principal, so you can think of them like that.

I also wanted to use the federated credentials through OpenID Connect (OIDC). This way the GitHub side would talk to Azure and just say "hi, I'm GitHub, I'm running a job for this repository, can you give me a temporary access token?" without any stored secrets that are valid for many years and are tricky to rotate.

Let's start with the identity (a rough CLI equivalent is sketched after the list):

  • Go to "Managed Identities".
  • Click "Create".
  • Just fill the name.
  • The important part: select a region that supports "federated credentials". It doesn't matter where the resources it will need to reach are located, because identities from one region can be used in any other region. Just check here to see which regions are currently unsupported for federated identity credentials. Using an unsupported region will still let you create the identity, but opening the credentials section will tell you that it's not supported (guess how I know ;).
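
A rough CLI equivalent (I created the identity in the portal; the identity name, resource group and region are placeholders):

az identity create \
  --name github-code-signing \
  --resource-group <resource-group> \
  --location <region-with-federated-credential-support>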

Now we need to assign it a role. According to the AzureSignTool documentation, it needs the following Key Vault policies:

Key: Verify, Sign, Get, List
Secret: Get, List
Certificate: Get, List

As we're using RBAC to express access rights, we need to translate this into a different set of rules. A helpful resource is hidden in one of the Azure tools that compares Key Vault access policies with RBAC rules to help with migration. The interesting file is located here. With its help, we can see that AzureSignTool should need the following rights:

  • Microsoft.KeyVault/vaults/keys/read
  • Microsoft.KeyVault/vaults/keys/sign/action
  • Microsoft.KeyVault/vaults/keys/verify/action
  • Microsoft.KeyVault/vaults/secrets/getSecret/action
  • Microsoft.KeyVault/vaults/secrets/readMetadata/action
  • Microsoft.KeyVault/vaults/certificates/read

There's no predefined Key Vault role with those exact rights, so we must create one. Custom roles can be created inside a subscription, management group, or resource group. I went with the subscription level, so it would be available anywhere. To add the custom role (a rough CLI sketch follows the list):

  • Open your resource group that has your vault and managed identity.
  • Open "Access Control (IAM)" on the left.
  • Click "Add" — "Custom role".
  • Name it and give some description, so you won't be wondering what its purpose is later.
  • On "Permissions" screen click "Add permissions" and in there search for "Key Vault".
  • Open the "Microsoft Key Vault" set and switch to "Data Actions". Those are the actions that allow using the data stored, in our case the certificate. All the rights mentioned above, that you need to add, will be in this group. The other "Actions" group is for control operations, like updating stored data, configuring accessibility, etc.
  • If you move now to JSON view, you should see all the rights inside the "dataActions" list.
  • Now you can just create the role.
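
If you'd rather define the role from the CLI, something like this should be equivalent (a sketch; the role name, description and subscription ID are placeholders, while the "DataActions" list is exactly the one from above):

cat > signing-role.json <<'EOF'
{
  "Name": "Key Vault Code Signing",
  "Description": "Data-plane rights needed by AzureSignTool",
  "Actions": [],
  "DataActions": [
    "Microsoft.KeyVault/vaults/keys/read",
    "Microsoft.KeyVault/vaults/keys/sign/action",
    "Microsoft.KeyVault/vaults/keys/verify/action",
    "Microsoft.KeyVault/vaults/secrets/getSecret/action",
    "Microsoft.KeyVault/vaults/secrets/readMetadata/action",
    "Microsoft.KeyVault/vaults/certificates/read"
  ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF

az role definition create --role-definition @signing-role.json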

Now you can assign that role to the managed identity you created before (a rough CLI equivalent is sketched after the lists below). To do this, just:

  • Go to "Managed Identities".
  • Open the identity created for the GitHub Action.
  • Open "Azure role assignments".
  • "Add role assignment".
  • There select:
    • Scope — "Key Vault".
    • Subscription — the one you're using.
    • Resource — the vault you created for certificate.
    • Role — the custom role you just made.
  • And "Save".

The "Add role assignment" has currently "(preview)" in its name. It seems to be working fine, but its layout may change in the future. As a matter of fact, you can grant this access also from inside the vault by:

  • Going to the vault's "Access control (IAM)".
  • Selecting the new, custom role.
  • In "Members" select "Managed identity".
  • In "Select members" in "Managed identity" pick "User-assigned managed identity".
  • The created identity should appear below and can be selected.
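
Either way, the CLI version of that assignment would look roughly like this (a sketch; the identity, role and vault names are placeholders and must match what you created above):

az role assignment create \
  --role "Key Vault Code Signing" \
  --assignee-object-id $(az identity show --name github-code-signing --resource-group <resource-group> --query principalId -o tsv) \
  --assignee-principal-type ServicePrincipal \
  --scope $(az keyvault show --name <vault-name> --query id -o tsv)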

Now we need to let GitHub use that identity:

  • Inside the new identity click "Federated credentials".
  • Then "Add Credential".
  • In the "Scenario" pick "Github Actions deploying Azure resources", which is not exactly what we'll be doing, but works.
  • The "Organization" and "Repository" you can copy out from your GitHub repository URL, which consists of https://github.com/<organization>/<repository>.
  • "Entity" — that's the fun part. All the descriptions I've found say to use Environment, but what if you're not using it (or you're on free plan, that sometimes don't even support it). From my experiments, it seems that by default GitHub sends the branch identifier in the "Subject identifier". And that's what I used. I selected "Entity" — "Branch" and "Branch" — "main". This way, only builds triggered for main will be able to use the vault. Is it bad? Maybe, if you want to test something and use signing from a different branch. But in this case, you can add another credential for another branch. A bit cumbersome but should work.

One more thing regarding the "Entity" and "Subject identifier": there is a way to manipulate the identifier GitHub sends to get the token. There are some API calls available for this, although I'm not sure if they're always available or only on paid plans, and I didn't investigate them much. If you could configure it, for example, not to send the branch name at all, then you could use the "Edit (optional)" link in the "Subject identifier" field and state that the only thing present will be organization/repository. Then any branch could use signing. From my point of view, limiting access to the main branch only is not a bad thing, so I left it at that.
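
For reference, the same federated credential can be added from the CLI roughly like this (a sketch; the names are placeholders, the subject string is the branch-based format described above, and the audience is the default one used by GitHub's OIDC token exchange):

az identity federated-credential create \
  --name github-main-branch \
  --identity-name github-code-signing \
  --resource-group <resource-group> \
  --issuer https://token.actions.githubusercontent.com \
  --subject repo:<organization>/<repository>:ref:refs/heads/main \
  --audiences api://AzureADTokenExchange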

At this point you have a managed identity that can be assumed by GitHub using OIDC and that has the rights needed by AzureSignTool to perform signing with your certificate. You can also check it from the other side: if you go to Key Vault, open the vault with the certificate and go to "Access control (IAM)", then in "Role assignments" you should see (apart from yourself) the new managed identity and its role.

Signing the code

First, something I learnt during this process: there's a repository describing all the current GitHub Actions runner images here. What you can find there is a list of already preinstalled software, like this one. This can potentially speed things up. For example, you can just run the "az" CLI tool without the custom azure/CLI action, because it's preinstalled; the same goes for "dotnet". Unfortunately, as of now, AzureSignTool is not on the list, so we'll need to install it.

AzureSignTool (available on GitHub here) is, at the time of writing, at version 4.0.1. It's possible that future versions will make some things simpler or different.

GitHub token

To enable the OIDC magic that allows GitHub to use the managed identity, we need to grant the workflow the "id-token: write" permission. The default permissions of the GITHUB_TOKEN are configured at the organization level, but the important thing to know is that adding a "permissions" section to the workflow resets all the default rights to "None". You can't just set "id-token"; you need to set all the other permissions you need there too. For example, the code checkout step won't be able to access the repository and will fail with a "remote: Repository not found" error if you don't reinstate "contents: read" (again, guess how I learnt about it: my test code-signing workflow was working perfectly fine, and after integrating it into the main build workflow it started to fail at checkout, which was totally unexpected). Our setup is quite simple, so I just put this permissions block as a top-level key:

permissions:
    id-token: write
    contents: read
                                    

GitHub workflow

Because signing will work only for the "main" branch, I set an environment variable that's used by steps to execute conditionally. The assigned result is a bool, but during checks it becomes a string, so you need to compare it as such (it took me a while to understand why all my conditions were always true). Setting the variable is done like this:

env:
  IS_RELEASE: ${{ github.ref == 'refs/heads/main' }}
                                    

The job that builds the signed binaries needs those additional steps:

- name: Install AzureSignTool
  if: env.IS_RELEASE == 'true'
  run: dotnet tool install --no-cache --global AzureSignTool --version 4.0.1
- name: Azure login
  if: env.IS_RELEASE == 'true'
  uses: azure/login@v1
  with:
    client-id: ${{ secrets.AZ_CLIENT_ID }}
    tenant-id: ${{ secrets.AZ_TENANT_ID }}
    subscription-id: ${{ secrets.AZ_SUBSCRIPTION_ID }}
- name: Azure token
  if: env.IS_RELEASE == 'true'
  run: |
      $az_token=$(az account get-access-token --scope https://vault.azure.net/.default --query accessToken --output tsv)
      echo "::add-mask::$az_token"
      echo "AZ_TOKEN=$az_token" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
                                    

The first step is rather self-explanatory. You might only want to update the tool version when a new one is available, or remove the step completely if the tool ever gets preinstalled in the runner image. In the second step, you can see that three secrets are passed to the login action. You can find the needed values in various places in the Azure portal, for example:

  • Go to "Managed Identities".
  • Open the identity created for GitHub.
  • Select "Properties".
  • Tenant ID and Client ID are visible in "Properties" section (they both are UUIDs).
  • Subscription ID will be visible in "Essentials" — "Id". That's the first UUID after "/subscriptions/…".

Those three things are enough for GitHub to obtain a token and assume the managed identity. Just add them to your repository secrets.
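
If you prefer, the same three values can be read from the CLI (a sketch; the identity name and resource group are placeholders):

az identity show --name github-code-signing --resource-group <resource-group> --query "{clientId:clientId, tenantId:tenantId}"

az account show --query id -o tsv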

Last step uses the "az" CLI tool to get the token and save it into environment variable, it will also enable masking for the token value, so it doesn't appear in logs. You might store it as GITHUB_OUTPUT instead if you like. This part is needed, because AzureSignTool currently can't use the managed identity directly while executing on GitHub managed worker. There is a -kvm switch available, but it doesn't work in our scenario. It probably will work, if the tool is executed on a resource (like a VM) that is controlled by you, runs inside Azure and has the correct identity assigned to it. From such VM it would be able to query internal Azure IP and obtain the access token associated with identity. GitHub VM that runs the workflow is in Azure, but we can't assign it the identity, so the whole magic won't work. At least that's my understanding of the situation, but no matter if it's right, or wrong, the result is the same — that switch does not work and we need to produce that token using "az" tool.

The token, from what I've seen, is valid for 24 hours from creation. If you're using GitHub-hosted runners, the documentation says that the VM is discarded after being used, so the token should not leak. In the case of private runners (hosted by you), the azure/login action documentation mentions the possibility of destroying the token after using it, so another task running on the same VM won't be able to steal it. They have an example step to do this, but it can be simplified to not use the azure/CLI action, so this should work:

- name: Azure logout
  run: |
        az logout
        az cache purge
        az account clear
                                    

AzureSignTool is practically a drop-in replacement for SignTool; the difference is in the parameter prefix (it uses dashes), plus you need to pass the additional Azure information. In our case it looks like this:

azuresigntool.exe sign --verbose -kvu <your_vault_URI_here> -kvc <your_certificate_name_here> -kva ${{ env.AZ_TOKEN }} -fd sha256 -tr <your_timestamping_url_here> -s <file_to_sign>
                                    

Here's where to find the values you need:

  • Vault URI — open the vault containing the certificate and in "Overview" there's "Vault URI". It starts with https and the vault name, you want the whole thing, including https://.
  • Certificate name — that you can see in the vault, in "Certificates" in the "Name" column.
  • Token — that's the one fetched with "az" tool. Assuming you stored it in GITHUB_ENV as AZ_TOKEN, then you don't need to change anything.
  • Timestamping URL — use whatever you were using before. If you weren't using one, start now; otherwise you'll have to re-sign all your binaries once your current certificate becomes invalid for whatever reason. A timestamp countersignature extends the lifetime of your signature for as long as the timestamp authority's certificate is valid. It costs nothing to use, and URLs of such servers are easy to find. You can use the server provided by your CA, but any other will also work; the only slight difference is when the certificate of a given timestamp authority expires (it might be, for example, 10 years for authority A and 9 years for B). This example uses "-tr", so an RFC 3161 timestamping server is needed. It's the newer and preferred way; if you need full backwards compatibility, use "-t" to get an Authenticode timestamp instead, which also works fine in current versions of Windows.

The "-s" (skip signed) option tells the tool not to replace signature in files that are already signed. Depending on what you want to achieve, you might want to remove this switch. One more thing to know — dual signing is currently not possible.

Getting back to AzureSignTool: we use Inno Setup to create the installer and sign all the things inside, so our steps look like this:

- name: Build signed installer
  if: env.IS_RELEASE == 'true'
  run: '"%programfiles(x86)%\Inno Setup 6\iscc.exe" /Ssigntool="azuresigntool.exe sign --verbose -kvu <your_vault_URI_here> -kvc <your_certificate_name_here> -kva %AZ_TOKEN% -fd sha256 -tr <your_timestamping_url_here> -s $f" installer.iss'
  shell: cmd
- name: Build unsigned installer
  if: env.IS_RELEASE == 'false'
  run: >
        "%programfiles(x86)%\Inno Setup 6\iscc.exe"
        /DNO_SIGNING
        installer.iss
  shell: cmd
                                    

For some unknown reason, when I was trying to break the "run" line in the first step using ">" (like I did in the second step), iscc.exe was going crazy. I tried a few permutations; the line in the logs looked correct, but I always ended up with "Failed to execute Sign Tool command. Error 87: The parameter is incorrect". At the same time, using ">" while invoking AzureSignTool without iscc.exe works perfectly fine. In the end I gave up and stuffed everything into one long line that just works.

You might also wonder about the "/DNO_SIGNING". That's because if you tell Inno Setup to expect a sign tool configuration and you don't pass the /S parameter, you'll end up with a "Value of [Setup] section directive "SignTool" is invalid" error. The /D switch defines a variable, and in the .iss file we have just:

#ifndef NO_SIGNING
  SignTool=signtool
#endif
                                    
This removes the signing completely if the variable is not defined (for example, when we build from a different branch, which is not authorized to use the managed identity in Azure).

The end

And that's it. You now have a GitHub Action that uses an Azure Managed Identity to access a Key Vault that stores your code signing certificate inside an HSM. You comply with all the current CA requirements and can build signed binaries whenever you want :).

P.S. If I forgot to write about something, or you found information that needs to be corrected, tell us about it using our contact form.

P.P.S. If you are interested in improving your defences against supply chain attacks and exploits — contact us — we're looking for pilot customers. You can watch a brief overview and demo here.