Today, I'd like to take a break from our usual supply chain attack topics and talk about something that is covered on the internet, but only if you look long enough, combine multiple sources and engage in some educated guesswork and experimentation.
NOTE:
Since June 1, 2023, code signing certificates have to be generated and stored on Hardware Security Modules (HSMs). Before that date there was a big price difference between OV and EV certificates. OV (Organization Validated) required simplified verification of who you are and could be delivered as a simple certificate file — nothing fancy and quite cheap (you could find offers for around $90/year). EV (Extended Validation) required more paperwork and extended vetting, and had to be secured by an HSM. As a bonus, EV certificates also prevent the Microsoft SmartScreen alert about an "unrecognized application": being signed by an EV certificate grants implicit trust, while an OV certificate needs to build up reputation over time before the alert goes away. Because of that, EV certificates were always more expensive (around $400/year).
Right now every type of code signing certificate has to be HSM-backed, and not by just any HSM, but by one that is at least FIPS 140-2 Level 2 or Common Criteria EAL 4+ compliant. So what has changed? Prices of EV certificates stayed the same, but the cheap OV? It's now around $300/year, and that's only if you have your own HSM; if you don't, you need to pay another $100 for a USB HSM dongle that will be delivered to you. The good thing about this situation is that the previous "we'll just send you a file" approach was not very secure, because such a certificate was easier to steal and abuse. Using an HSM for generation and storage is much better from this point of view (and it's the main reason behind the industry-wide move).
With the new rules a "small" problem arises — we get a USB dongle that we need to plug into some computer that will sign our software, which might be messy if builds are done in the cloud, like in GitHub Actions. Yes, you can plug it into a computer at the office or at home and run a GitHub Runner there that will get the job done. But then you rely on the stability of such a solution and add more maintenance tasks for yourself. You can also buy your own HSM, but they're expensive and, again, you need to maintain them. The last option is to use a cloud HSM, and that is the approach I want to discuss here.
There's one more thing to consider before we begin — if the price difference between OV and EV is not that big anymore, maybe it's worth going for EV and making SmartScreen happy? In our case we went with the EV option, but it's not mandatory; it only slightly affects the process of getting the certificate, not how you use it later, so this text is relevant either way.
What I'm about to describe uses Azure Key Vault as the cloud HSM and GlobalSign as the Certificate Authority (CA). Why? We already use Azure through the Microsoft for Startups Founders Hub. If you're in a different situation, Google Cloud offers the same functionality and I'd be surprised if Amazon didn't have something similar. As for GlobalSign — they officially support cloud HSMs and were fast to reply to all my questions. The other company I'm aware of that also has official support is DigiCert. It's possible that whoever you use right now will also work, but you'll have to ask them first.
Creating a new Key Vault is quite simple and, in our case, requires only a few specific choices — most importantly the Premium pricing tier (the only one with HSM-backed keys) and the RBAC permission model (which I'll rely on later in this text).
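If you prefer the CLI over the portal, creating such a vault can be sketched with the "az" tool. This is a sketch under my assumptions — the resource group, vault name and location below are placeholders; the Premium SKU is what enables HSM-backed keys:

```shell
# Hypothetical names -- replace with your own.
RESOURCE_GROUP="my-signing-rg"
VAULT_NAME="my-signing-vault"
LOCATION="westeurope"

# Premium SKU is required for HSM-backed keys;
# --enable-rbac-authorization switches the vault to RBAC
# instead of the older Access Policies.
az keyvault create \
  --name "$VAULT_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --location "$LOCATION" \
  --sku premium \
  --enable-rbac-authorization true
```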
Now you have a vault capable of storing certificates inside a shared HSM that will be accessible from GitHub and won't kill you with its price. This option currently costs around $5/certificate/month, plus $0.15 for every 10,000 transactions (most requests made to the vault's REST API count as transactions).
There is a different offering called "Dedicated HSM", which provides dedicated hardware for your exclusive use, but it's way more expensive and not needed here.
It's good to enable logging for the vault too, so you can see who is using it. That part is well documented and easy to find. In general, you add a diagnostic setting to the vault and send its audit logs to a Log Analytics workspace.
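For reference, a minimal CLI sketch of that setup — the resource IDs below are placeholders, and "AuditEvent" is the Key Vault log category:

```shell
# Placeholders -- replace with your real resource IDs.
VAULT_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-signing-rg/providers/Microsoft.KeyVault/vaults/my-signing-vault"
WORKSPACE_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-signing-rg/providers/Microsoft.OperationalInsights/workspaces/my-logs"

# Send the vault's audit events to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name "vault-audit-logs" \
  --resource "$VAULT_ID" \
  --workspace "$WORKSPACE_ID" \
  --logs '[{"category":"AuditEvent","enabled":true}]'
```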
Then you can look at the logs through Log Analytics, or by clicking "Logs" in the vault menu itself (just below "Diagnostic settings"). On top of that, you can add some Alerts to know when the vault is down or is being used more than usual (the "custom log search" signal might be handy). The price of logging and alerting is quite low — a few dollars per month per GB of logs, and something similar per alert (depending on the alert type).
Depending on the type — OV or EV — you'll need different amounts of time, paperwork and phone calls (with EV being the more demanding one). The paperwork involves printing, signing, scanning and sending documents back, so have a printer and a scanner/phone handy.
The whole procedure is: pay first, then we'll verify you, then we'll grant the certificate. Keep on reading to see what you should ask your CA before initiating the procedure, because maybe you need to switch the CA to a more compatible one first.
If you're a young company, note that there's a requirement for EV certificates that the company must have existed for at least 3 years. In the case of GlobalSign it's not a hard requirement; we just had to provide additional papers to prove that we are who we are. In the case of other CAs — ask them.
Another tricky part is the "certification of used HSM", where you confirm that the HSM is compliant with FIPS 140-2 Level 2 or Common Criteria EAL 4+. In the case of GlobalSign there's a form where you have to state which manufacturer and model of HSM you're using. Azure does mention some models that are in use, but there's no way to explicitly pick them while creating a vault, and Microsoft might add more models in the future without notification. The solution here (which I got from GlobalSign support) is to put manufacturer: Microsoft, model: Azure Key Vault. I'm not sure about DigiCert, but both are trusted Azure partners, so the procedure might be similar. In the case of other CAs — you have to ask.
When you start your certificate order with GlobalSign, you must pick the type of certificate you want from a list. The "Extended Validation (EV) Code Signing (HSM)" is the one I used. The "(HSM)" part was a bit confusing — does it mean I want to use my own HSM, or that I want them to send me the additional USB dongle? Now I know — the "(HSM)" suffix means that I want to use an HSM I already have.
One more thing to ask your CA about is whether they require HSM attestation. It is a piece of data that proves the HSM is compliant with the norms, that the certificate was generated by it, and that it has all the proper protections in place. In the case of GlobalSign that part was not needed, possibly because of their cooperation with Azure. But if it were required, Key Vault currently has no way of providing that data. Google Cloud, on the other hand, allows downloading an "attestation bundle".
The whole company-vetting dance took me a bit over a week from start to certificate in the vault, but there was another catch — the CA vetting team wants to call you at some point, and they need your phone number to be verifiable through a "trusted third party", like an online phone book (Yellow Pages and such). If it's not available, they offer other, much slower, means of validation. How do I know? Because there was a bit of a misunderstanding with my phone number, which in the end got resolved and the phone call validation was completed, but it apparently also triggered a fallback option: a letter with a secret password that has to be emailed back. The letter was sent on August 14th and got here on August 30th, so if you opt for this way of validation — plan for much more time.
Assuming you passed the vetting, you need to generate the certificate in vault, submit it to CA to get it signed and "put it back" into vault, so it can be used.
Being an Azure admin or owner apparently doesn't give you rights to fully manage Key Vault contents, which is somewhat counterintuitive to me, but that's how it works. I'm writing this text after completing the whole procedure, so I'm not 100% sure, but somewhere around this moment I got to the point where I had the vault but could not perform any operations on it, like adding a new certificate, because I had "no rights". To be able to configure the vault, you need to grant yourself rights, and in my case the "Key Vault Administrator" role made the most sense.
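In the portal this is done through the vault's "Access control (IAM)" blade; the CLI equivalent looks roughly like this (the account name and vault resource ID below are placeholders):

```shell
# Placeholder -- use your own vault's resource ID.
VAULT_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-signing-rg/providers/Microsoft.KeyVault/vaults/my-signing-vault"

# Grant yourself full data-plane control over this one vault.
az role assignment create \
  --role "Key Vault Administrator" \
  --assignee "you@example.com" \
  --scope "$VAULT_ID"
```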
I can't say much about this approach, as we first applied for the certificate and only started to set up Key Vault when the Certificate Signing Request (CSR) was needed. If you search the internet for "integrating azure key vault with certificate authorities" you'll find the official Microsoft documentation. It boils down to adding your CA credentials to the Key Vault configuration (currently only GlobalSign and DigiCert are supported). Then, when you go to "certificates" in Key Vault and click "generate/import", you can select "Certificate issued by an integrated CA" as the "Type of Certificate Authority". From there you select what you want, and Key Vault should take care of ordering and managing the certificate for you.
Right now we're at the point where the CA wants a CSR generated by our Key Vault HSM. It felt like using the "integrated CA" option would create a new order for yet another certificate, and that's not what I wanted. It's possible that I'm wrong, but I went with the safe path of "certificate issued by a non-integrated CA", where everything has to be done manually. This way also works with CAs that allow using cloud HSMs but are not official partners.
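A CLI sketch of the manual path — the policy below is an assumption on my part (check your CA's key requirements): "Unknown" as the issuer name is what marks the request as coming from a non-integrated CA, and "RSA-HSM" keeps the key inside the HSM:

```shell
VAULT_NAME="my-signing-vault"   # placeholder
CERT_NAME="codesigning-2024"    # placeholder

# Certificate policy for a non-integrated CA. The subject is
# mostly a formality -- the CA may replace it anyway.
cat > policy.json <<'EOF'
{
  "issuerParameters": { "name": "Unknown" },
  "keyProperties": {
    "exportable": false,
    "keyType": "RSA-HSM",
    "keySize": 3072,
    "reuseKey": false
  },
  "x509CertificateProperties": {
    "subject": "CN=Example Company",
    "validityInMonths": 12
  }
}
EOF

# Creates the key pair inside the HSM and a pending CSR.
az keyvault certificate create \
  --vault-name "$VAULT_NAME" \
  --name "$CERT_NAME" \
  --policy @policy.json
```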
As a matter of fact, when the CA signs your request, they replace some fields according to what you paid for, so even leaving the default (Extended) Key Usage in the previous steps should not change much. That's because otherwise you could buy some cheap SSL certificate and then send a request saying "I want a certificate that can sign emails, encrypt web traffic, sign binaries and also other certificates, because I want to be a CA myself". The same happens with the "Subject" field. It's mandatory, so you must provide something, but the CA might replace it anyway. In the case of GlobalSign I provided only the Common Name (CN) and they added location (L), email (E), organization (O), etc.
Now all that's left is to download the CSR from the vault and submit it to the CA.
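Retrieving the CSR can be sketched via the CLI as below (names are placeholders; note that some CA portals may insist on 64-character line wrapping of the base64 body):

```shell
VAULT_NAME="my-signing-vault"   # placeholder
CERT_NAME="codesigning-2024"    # placeholder

# The pending certificate operation holds the base64-encoded CSR.
CSR=$(az keyvault certificate pending show \
  --vault-name "$VAULT_NAME" \
  --name "$CERT_NAME" \
  --query csr --output tsv)

# Wrap it in PEM armor so the CA portal accepts it.
{
  echo "-----BEGIN CERTIFICATE REQUEST-----"
  echo "$CSR" | fold -w 64
  echo "-----END CERTIFICATE REQUEST-----"
} > request.csr
```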
Once you get back the signed certificate, you merge it into the pending request in the vault.
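The "putting it back" from before is a merge of the CA's response into the pending operation — roughly like this (vault, certificate and file names are placeholders; the file is whatever your CA returned, usually PEM or PKCS#7):

```shell
VAULT_NAME="my-signing-vault"     # placeholder
CERT_NAME="codesigning-2024"      # placeholder

# Merge the CA-signed certificate into the pending request;
# after this the certificate in the vault is ready for use.
az keyvault certificate pending merge \
  --vault-name "$VAULT_NAME" \
  --name "$CERT_NAME" \
  --file signed_certificate.p7b
```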
An HSM-based certificate means that you can't just store its private key somewhere in GitHub; the signing process must happen inside the HSM. The HSM receives the hash of the binary, signs it with the certificate's private key and returns the result, which is embedded in the file.
In our case we want to build things with GitHub Actions, so we need to grant them access to the vault. For that purpose we'll use an Azure Managed Identity with federated credentials. Then we need a tool that can sign with a certificate that lives in Azure. The default Microsoft SignTool can't do this, but its open-source replacement AzureSignTool can (with a little help).
All the examples I found on the internet talk about adding a new Application to Azure and then granting it access rights through a Service Principal. They also assume that Key Vault is not using Role-Based Access Control (RBAC), but the older Access Policies. That still works (assuming you don't select RBAC while creating the vault) and you're free to use it, but it's not what I was planning to do.
I wanted to use a "user-assigned managed identity", because they are easier to secure (no secrets to copy-paste) and can be attached to multiple resources if needed. Technically they're a special case of Service Principal, so you can think of them like that.
I also wanted to use the federated credentials through OpenID Connect (OIDC). This way the GitHub side would talk to Azure and just say "hi, I'm GitHub, I'm running a job for this repository, can you give me a temporary access token?" without any stored secrets that are valid for many years and are tricky to rotate.
Let's start with the identity.
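Creating a user-assigned managed identity is a one-liner in the CLI (group and identity names below are placeholders):

```shell
RESOURCE_GROUP="my-signing-rg"    # placeholder
IDENTITY_NAME="github-signer"     # placeholder

# Creates the identity; note the clientId and principalId in the
# output -- both will be needed later.
az identity create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$IDENTITY_NAME"
```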
Now we need to assign it a role. According to the AzureSignTool documentation, it needs the following Key Vault access policies:

Key:         Verify, Sign, Get, List
Secret:      Get, List
Certificate: Get, List
As we're using RBAC to express access rights, we need to translate these into a different set of rules. A helpful resource is hidden in one of Azure's migration tools, which compares Key Vault access policies with RBAC permissions. The interesting file is located here. With its help, we can translate the policies above into the equivalent RBAC data actions AzureSignTool should need.
There's no predefined Key Vault role with those exact rights, so we must create our own. Custom roles can be created inside a subscription, management group, or resource group. I went with the subscription level, so the role would be available everywhere.
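A sketch of such a role definition, based on my reading of the policy-to-RBAC mapping — treat the exact data-action list as an assumption and double-check it against the mapping file mentioned above (subscription ID and role name are placeholders):

```shell
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"  # placeholder

# Custom role with only the data-plane rights AzureSignTool needs.
cat > signer-role.json <<EOF
{
  "Name": "Code Signing Vault User",
  "IsCustom": true,
  "Description": "Minimal rights needed by AzureSignTool",
  "Actions": [],
  "DataActions": [
    "Microsoft.KeyVault/vaults/keys/read",
    "Microsoft.KeyVault/vaults/keys/sign/action",
    "Microsoft.KeyVault/vaults/keys/verify/action",
    "Microsoft.KeyVault/vaults/secrets/getSecret/action",
    "Microsoft.KeyVault/vaults/secrets/readMetadata/action",
    "Microsoft.KeyVault/vaults/certificates/read"
  ],
  "AssignableScopes": ["/subscriptions/$SUBSCRIPTION_ID"]
}
EOF

az role definition create --role-definition @signer-role.json
```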
Now you can assign that role to the managed identity you created before.
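The CLI version of that assignment — the role name, identity name and vault resource ID below are placeholders:

```shell
RESOURCE_GROUP="my-signing-rg"    # placeholder
IDENTITY_NAME="github-signer"     # placeholder
VAULT_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-signing-rg/providers/Microsoft.KeyVault/vaults/my-signing-vault"

# Look up the identity's service principal object ID...
PRINCIPAL_ID=$(az identity show \
  --resource-group "$RESOURCE_GROUP" \
  --name "$IDENTITY_NAME" \
  --query principalId --output tsv)

# ...and bind the custom role to it, scoped to this one vault.
az role assignment create \
  --assignee-object-id "$PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Code Signing Vault User" \
  --scope "$VAULT_ID"
```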
The "Add role assignment" option currently has "(preview)" in its name. It seems to work fine, but its layout may change in the future. As a matter of fact, you can also grant this access from inside the vault, through its "Access control (IAM)" blade.
Now we need to let GitHub use that identity by adding a federated credential to it that matches your repository.
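A sketch of that step via the CLI — organization, repository, identity and credential names are placeholders; the subject here restricts signing to the main branch:

```shell
RESOURCE_GROUP="my-signing-rg"    # placeholder
IDENTITY_NAME="github-signer"     # placeholder

# Trust OIDC tokens issued by GitHub Actions for the main branch
# of one specific repository.
az identity federated-credential create \
  --name "github-main-branch" \
  --identity-name "$IDENTITY_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --issuer "https://token.actions.githubusercontent.com" \
  --subject "repo:my-org/my-repo:ref:refs/heads/main" \
  --audiences "api://AzureADTokenExchange"
```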
One more thing regarding the "Entity" and "Subject identifier" — there is a way to manipulate the identifier GitHub sends when requesting the token; some API calls are available for this, although I'm not sure whether they're available on all plans and I didn't investigate them much. If you could, for example, configure GitHub not to send the branch name at all, then you could use the "Edit (optional)" link in the "Subject identifier" field and state that the only thing present will be organization/repository. Then any branch could use signing. From my point of view, limiting access to the main branch only is not a bad thing, so I left it at that.
At this point you have a managed identity that can be assumed by GitHub using OIDC and that has the rights AzureSignTool needs to sign with your certificate. You can also check it from the other side — if you go to Key Vault, open the vault with the certificate and go to "Access Control (IAM)", then under "Role assignments" you should see (apart from yourself) the new managed identity and its role.
First, something that I learnt during this process: there's a repository describing all the current GitHub Actions runner images here. What you can find there is a list of preinstalled software, like this one. This can potentially speed things up. For example, you can just run the "az" CLI tool without the custom azure/CLI action, because it's preinstalled; the same goes for "dotnet". Unfortunately, as of now, AzureSignTool is not on the list, so we'll need to install it.
The AzureSignTool (available on GitHub here) at the time of writing this text is in version 4.0.1. It's possible that future versions will make some things simpler/different.
To enable the OIDC magic that allows GitHub to use the managed identity, we need to grant the workflow the "id-token: write" permission on its GITHUB_TOKEN. The default permissions of the GITHUB_TOKEN are configured at the organization level, but the important thing to know is that adding a "permissions" section to the workflow resets all the default rights to "None". You can't just set "id-token"; you need to set all the other permissions you need there too. For example, the code checkout step won't be able to access the repository and will fail with a "remote: Repository not found" error if you don't reinstate "contents: read" (again — guess how I learnt about it — my test code signing workflow was working perfectly fine, and after integrating it into the main build workflow it started to fail at checkout, which was totally unexpected). Our setup is quite simple, so I just put this permissions block as a top-level key:
permissions:
  id-token: write
  contents: read
Because signing will work only for the "main" branch, I set an environment variable that's used by steps to execute conditionally. The assigned result is a bool, but during checks it becomes a string, so you need to compare it as such (it took me a while to understand why all my conditions were always true). Setting the variable is done like this:
env:
  IS_RELEASE: ${{ github.ref == 'refs/heads/main' }}
The job that builds the signed binaries needs those additional steps:
- name: Install AzureSignTool
  if: env.IS_RELEASE == 'true'
  run: dotnet tool install --no-cache --global AzureSignTool --version 4.0.1

- name: Azure login
  if: env.IS_RELEASE == 'true'
  uses: azure/login@v1
  with:
    client-id: ${{ secrets.AZ_CLIENT_ID }}
    tenant-id: ${{ secrets.AZ_TENANT_ID }}
    subscription-id: ${{ secrets.AZ_SUBSCRIPTION_ID }}

- name: Azure token
  if: env.IS_RELEASE == 'true'
  run: |
    $az_token=$(az account get-access-token --scope https://vault.azure.net/.default --query accessToken --output tsv)
    echo "::add-mask::$az_token"
    echo "AZ_TOKEN=$az_token" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
The first step is rather self-explanatory. You might only want to bump the tool version when a new one is available, or remove the step completely if the tool gets preinstalled in the runner image. In the second step you can see three secrets passed to the login action. You can find the needed values in various places in the Azure portal — for example, the client ID is shown on the managed identity's overview page, while the tenant and subscription IDs are on the subscription's overview page.
Those three things are enough for GitHub to obtain a token and assume the managed identity. Just add them to your repository secrets.
The last step uses the "az" CLI tool to get the token and save it into an environment variable; it also enables masking for the token value, so it doesn't appear in logs. You might store it as GITHUB_OUTPUT instead if you like. This part is needed because AzureSignTool currently can't use the managed identity directly while executing on a GitHub-managed worker. There is a -kvm switch available, but it doesn't work in our scenario. It probably would work if the tool were executed on a resource (like a VM) that is controlled by you, runs inside Azure and has the correct identity assigned to it. Such a VM would be able to query the internal Azure metadata endpoint and obtain the access token associated with the identity. The GitHub VM that runs the workflow is in Azure, but we can't assign it the identity, so the whole magic won't work. At least that's my understanding of the situation, but whether it's right or wrong, the result is the same — that switch does not work and we need to produce the token using the "az" tool.
The token, from what I've seen, is valid for 24 hours from creation. If you're using GitHub-hosted runners, the documentation says that the VM is discarded after being used, so the token should not leak. In the case of private runners (hosted by you), the azure-login action documentation mentions the possibility of destroying the token after use, so another task running on the same VM won't be able to steal it. They have an example step to do this, but it can be simplified to not use the azure/CLI action, so this should work:
- name: Azure logout
  run: |
    az logout
    az cache purge
    az account clear
AzureSignTool is practically a drop-in replacement for SignTool; the difference is in the parameters' prefix — it uses dashes — plus you need to pass the additional Azure info. In our case it should look like this:
azuresigntool.exe sign --verbose -kvu <your_vault_URI_here> -kvc <your_certificate_name_here> -kva ${{ env.AZ_TOKEN }} -fd sha256 -tr <your_timestamping_url_here> -s <file_to_sign>
The values that you need can be found as follows: the vault URI is on the Key Vault's overview page, the certificate name is the one you gave it when generating the CSR, and the timestamping URL should come from your CA's documentation.
The "-s" (skip signed) option tells the tool not to replace the signature in files that are already signed. Depending on what you want to achieve, you might want to remove this switch. One more thing to know — dual signing is currently not possible.
Getting back to AzureSignTool. We use Inno Setup to create the installer and sign all the things inside, so our steps look like this:
- name: Build signed installer
  if: env.IS_RELEASE == 'true'
  run: '"%programfiles(x86)%\Inno Setup 6\iscc.exe" /Ssigntool="azuresigntool.exe sign --verbose -kvu <your_vault_URI_here> -kvc <your_certificate_name_here> -kva %AZ_TOKEN% -fd sha256 -tr <your_timestamping_url_here> -s $f" installer.iss'
  shell: cmd

- name: Build unsigned installer
  if: env.IS_RELEASE == 'false'
  run: >
    "%programfiles(x86)%\Inno Setup 6\iscc.exe"
    /DNO_SIGNING
    installer.iss
  shell: cmd
For some unknown reason, when I tried to break the "run" line in the first step using ">" (like I did in the second step), iscc.exe was going crazy. I tried a few permutations and the line in the logs looked correct, but I always ended up with "Failed to execute Sign Tool command. Error 87: The parameter is incorrect". At the same time, using ">" while invoking AzureSignTool without iscc.exe works perfectly fine. In the end I gave up and stuffed everything into one long line that just works.
You might also wonder about "/DNO_SIGNING". It's there because if you tell Inno Setup to expect a sign tool configuration and you don't pass the /S parameter, you'll end up with a "Value of [Setup] section directive "SignTool" is invalid" error. The /D switch defines a variable, and in the .iss file we have just:
#ifndef NO_SIGNING
SignTool=signtool
#endif

Which removes the signing completely if the variable is not defined (like if we want to build from a different branch, which is not authorized to use the managed identity in Azure).
And that's it. You now have a GitHub Actions workflow that uses an Azure Managed Identity to access a Key Vault that stores your code signing certificate inside an HSM. You comply with all the current CA requirements and can build signed binaries whenever you want :).
P.S. If I forgot to write about something, or you found information that needs to be corrected — tell us about it using our contact form.
P.P.S. If you are interested in improving your defences against supply chain attacks and exploits — contact us — we're looking for pilot customers. You can watch a brief overview and demo here.
© 2024 Forelens