Signing infrastructure for aarch64 #7361

Closed
opened 2018-11-07 08:47:44 +00:00 by pbrobinson · 46 comments
  • Describe what you need us to do:

There are now aarch64 systems shipping with secure boot enabled, so we need to be able to sign all the parts of the boot path (shim/grub2/kernel) like we do on x86_64. To do this we need the infrastructure (HSM, smart cards, etc.) to support it.

I'm not sure how the signing keys etc. are set up, or whether we already have enough smart cards, so this is a ticket to cover all of the various HW/infrastructure components.

  • When do you need this? (YYYY/MM/DD)

Sooner the better, but somewhat flexible.

  • When is this no longer needed or useful? (YYYY/MM/DD)

  • If we cannot complete your request, what is the impact?

There's the possibility of being unable to run on some HW due to secure boot requirements.


Metadata Update from @bowlofeggs:

  • Issue priority set to: Waiting on Assignee (was: Needs Review)
  • Issue tagged with: request-for-resources
Owner

So, the bkernel x86_64 boxes are using smart card readers that attach via USB.

I do not know, but I suspect the moonshot chassis has no USB to connect to, so we would need to move to mustangs for building. Do they have USB?

@smooge do you know the hardware you got for this? we should be able to check the bkernel boxes.

We will then need @pjones to prep a smart card with the needed info on it and get it to us?

Author

Yes, the mustang HW has USB onboard; we'd probably want to get SSDs for the ones we use, though.


The signing smart card fits into a USB connector like this:
https://www.frys.com/product/7540686

I think the SSD item you are mentioning is for a different reason? As in "We can possibly take the SSDs out of the ARM Calxedas to put in Mustangs at the next visit?" versus an SSD being used inside a mustang for signing.

Author

@smooge I understand the USB smart card. I meant SSD in the context of storage to replace the slow single HDDs currently in the mustangs to speed up builds. We could possibly use the ones in the calxeda, but I suspect they're already quite old.


Metadata Update from @smooge:

  • Issue assigned to smooge

So, we can't really get more of the smart cards in question, but I've been investigating alternatives, and I think we can do this with yubikeys. Is there a strong preference in terms of form factor between yubikey and yubikey nano?

My first thought is to go with the nano just because they're harder to casually remove from machines, but obviously those of y'all who actually have to touch the hardware might have your own concerns either way that I'm not aware of.


Just another note here - I also have a preference for the nano because they don't have NFC.


Metadata Update from @smooge:

  • Issue tagged with: security

I tagged this security so it gets on @puiterwijk's queue.


Sorry, I replied to @pjones on IRC, and nano's would be perfect.

Owner

Metadata Update from @kevin:

  • Issue tagged with: backlog
Author

So I think the latest update for this is that @pjones has provided the new signing HW to @kevin and it's awaiting a DC visit. I believe that HW should work with the new Lenovo aarch64 HW?


Next physical visit looks to be in June 2020 after we move the hardware to a new location.


Metadata Update from @smooge:

  • Assignee reset
Author

@kevin @smooge can we please make sure this is on the list for the DC visit.


I can do so, but I will need a non-NFC yubikey or whatever hardware is expected. [The newer yubikeys seem to come with NFC and other wireless doodads.]

Owner

A few things:

  • We are not sure when/if the next DC visit will be. Due to COVID19, we are attempting to do as much as we possibly can remotely. There may not be a DC visit for a long time.

  • I talked with @pjones the other day and he indicated there might be some new setup for this and asked me to not deploy the yubikeys that I have yet (or possibly at all).

So, let's actually come up with a plan here. First, I guess we will need to ask @pjones (here or, if you prefer, by email) what the signing setup will be like, and then we need to figure out how we can implement it without a site visit (or at least in case we don't have one anytime soon).

Author

So some details from pjones, just for reference:

> <pjones> might need more than one signing key on there, and I haven't gotten far enough yet to know if I can do that with the devices I gave nirik
> <pjones> *probably* if that happens I'll move the signing keys to softhsm with its db living on a luks device and use the hardware key to unlock the luks device.
> <pjones> I'm assuming those machines don't have tpms, but also using them may be more trouble than it's worth
Owner

The x86 builders do have tpm I think: [Mon Mar 23 14:20:13 2020] tpm_tis MSFT0101:00: 2.0 TPM (device-id 0xFE, rev-id 2)

No idea on the aarch64 ones.


Metadata Update from @smooge:

  • Issue tagged with: high-gain, high-trouble, ops
Author

> The x86 builders do have tpm I think: [Mon Mar 23 14:20:13 2020] tpm_tis MSFT0101:00: 2.0 TPM (device-id 0xFE, rev-id 2)
>
> No idea on the aarch64 ones.

"ls /dev/tpm*" would tell.

Can we have a status update on this in general? We now have aarch64 HW which supports secure boot, and the DC move is long over. It would be really useful to get this issue fixed.

Owner

From the infra side we are ready to move this forward again anytime.

We need to know from @pjones what the plan is for what hardware we need/how to get it to the DC, etc.

Owner

Mailed pjones about this to come up with a plan.

Owner

No answer, will try and catch him on irc.

Owner

Metadata Update from @zlopez:

  • Issue priority set to: Waiting on External (was: Waiting on Assignee)
Owner

PM'ed @pjones about this. Perhaps this is the year we will get this done. ;)

Owner

Metadata Update from @kevin:

  • Issue priority set to: Waiting on Assignee (was: Waiting on External)
Author

I was actually speaking with him about this earlier in the week and asked him to update details here.

Owner

ok. Pinging again here. ;)

Perhaps @rharwood or @javierm might have some news?

Trivia: The email of @pbrobinson asking for this is the very oldest email in my work inbox. :)


I can't speak to this issue specifically, but: we'll be swamped probably until at least May, so unfortunately probably no movement before then.

Owner

Metadata Update from @kevin:

  • Issue tagged with: blocked

I think the keys mentioned previously are either [YubiKey 5 Nano](https://www.yubico.com/product/yubikey-5-nano/) or [YubiKey 5C Nano](https://www.yubico.com/product/yubikey-5c-nano/).
Owner

[backlog refinement]
We still don't have the needed hardware for this

Owner

[backlog refinement]
We still don't have the needed hardware for this

Owner

So, this has been a long road, but hopefully some end is near.

We recently upgraded to sigul-12, which supports pesigning. We want to move signing over to that from the existing x86_64-only smart cards.

The last bit we need to make this work is to figure out how we want to hook sigul into the process and how to migrate to it.

Currently:

packages needing secure boot signing are set in the koji hub policy to go to specific builders. Those builders only do these builds and have smart card hardware in them and pesignd running on them to access those cards. The socket to access pesignd is then exported into the builds mock chroot and it knows if it can see that socket it should call pesign to sign things.
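The "if it can see that socket" check described above can be sketched like this; note the socket path is an assumption for illustration, not the real deployment path:

```python
import os
import stat

# Hypothetical location; the actual pesignd socket path depends on the
# pesign package and on how mock bind-mounts it into the chroot.
PESIGN_SOCKET = "/run/pesign/socket"

def pesign_available(socket_path=PESIGN_SOCKET):
    """True if a pesignd unix socket is visible in the chroot, i.e. the
    build should call pesign to sign rather than skip signing."""
    try:
        mode = os.stat(socket_path).st_mode
    except FileNotFoundError:
        return False
    # A plain file at that path is not enough; it must be a socket.
    return stat.S_ISSOCK(mode)
```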

I guess my first thought would be that we could just add code to pesign to call sigul instead of using the smart card, but I am not sure if they want to take this sort of change.

If not, we would need something that looks like pesignd to do this.

The sigul setup just uses sigul command line client and on the server
calls pesign to do the actual signing. It's not got any smart card
interface that I can see.

After that part is figured out, we need to figure out how to migrate over to the new setup. Do we need to export/import from the smart cards? Do we need new certs? Do we sign systemd-boot differently from kernel?

I'm going to also mail this to folks involved (pesign maintainers, etc).


Hey folks,

I just wanted to provide a bit of an update here.

I've written an alternate implementation of the pesign daemon that, instead of signing itself, forwards the request to sigul. I've also implemented a sigul client that doesn't depend on the orphaned python-nss package (so this can be used in Fedora 42+), and I've got a patch to add support to the sigul server to sign using pkcs11 so smartcards/HSMs can be used.

The pesign daemon and sigul client replacement are at https://github.com/fedora-infra/siguldry. A fork of Sigul with all the patches I believe are currently running in production, plus the pkcs11 support, is at https://pagure.io/fork/jcline/sigul/commits/main.

The siguldry CI ensures signing using pesign-client works with the replacement daemon, and sets up a complete Sigul server with SoftHSM for the signing key so I'm reasonably confident all this works (although there may be bugs yet lurking, of course). I'm working on getting things packaged in https://bugzilla.redhat.com/show_bug.cgi?id=2352566 and https://bugzilla.redhat.com/show_bug.cgi?id=2343280.

All that to say, I believe the software side of this is largely sorted out (although feedback to the contrary is welcome). My understanding is that the hardware side of things might also be sorted out during the upcoming data center move, so after that all that's left is to deploy the things.

Owner

Awesome!

We are getting some HSMs in the new datacenter (hopefully they will arrive later this month). Once those are there and we have access to them, I'll look at setting them up and we can test things out.
Then, when we switch to the new datacenter we can start using them. Time is going to be short, but hopefully getting things in place will be possible.

Owner

So, oldest ticket in the world... an update. ;)

We moved datacenters, and we have things in place to move signing over to sigul. I have successfully signed things with its pipeline.
We just need to set up the bridge and adjust the build process to use it.

But also, there were some issues with the kernel side of aarch64 signing... I am not sure of the status of those, but we will work on getting x86_64 over to the new setup, and then adding in aarch64 when it's ready should be super easy.


> But also, there were some issues with the kernel side on aarch64 signing... I am not sure the status of those, but we will work on getting x86_64 over to the new setup and then adding in aarch64 when it's ready should be super easy.

There's https://gitlab.com/cki-project/kernel-ark/-/merge_requests/3862 to ensure the aarch64 kernels enforce the lockdown LSM. We definitely don't want to sign anything before this is in.

The other kernel issue (that I know about) is with kexec. I need to go back and double check everything to be sure, but I believe the issue is simply that kexec wouldn't work with Secure Boot on, and if that's the case we could start signing before that gets fixed since it already doesn't work with Secure Boot.
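Whether a running kernel actually enforces lockdown can be read from `/sys/kernel/security/lockdown`, where the active mode is the bracketed entry. A small parser, as a sketch (the sysfs path is standard; the helper names are made up here):

```python
import re

def parse_lockdown(text):
    """Return the active lockdown mode from the sysfs format,
    e.g. 'none [integrity] confidentiality' -> 'integrity'."""
    m = re.search(r"\[(\w+)\]", text)
    if not m:
        raise ValueError("no active mode marked in %r" % text)
    return m.group(1)

def current_lockdown(path="/sys/kernel/security/lockdown"):
    """Read the active mode on a live system; raises FileNotFoundError
    if the lockdown LSM is not built in."""
    with open(path) as f:
        return parse_lockdown(f.read())
```

On a Secure Boot setup like the one discussed here, you would expect `integrity` (or stricter) once the kernel-ark change lands.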


> The other kernel issue (that I know about) is with kexec. I need to go back and double check everything to be sure, but I *believe* the issue is simply that kexec wouldn't work with Secure Boot on, and if that's the case we could start signing before that gets fixed since it already doesn't work with Secure Boot.

That's my understanding as well - it should be a safe failure for us.
Owner

Issue tagged with: sprint-0

Owner

Metadata Update from @kevin:

  • Issue assigned to kevin
kevin self-assigned this 2026-01-26 15:35:27 +00:00
Owner

So, overdue for an update here.

I have been working on this in recent weeks and have made a good deal of progress.

  • siguldry-pesign-bridge is configured in ansible
  • siguldry-pesign-bridge is deployed to buildhw-x86-02 and buildhw-a64-02 (which are currently both disabled in koji so we can test).
  • Both machines can now sign PE binaries via the entire flow (pesign-client -> siguldry-pesign-bridge -> sigul bridge -> sigul vault -> certs on hardware)

Remaining issues:

  • On aarch64, some artifacts seem to get a corrupted or badly placed cert; @jcline is looking into that
  • The aarch64 signed grub doesn't seem to work in my testing. Or perhaps it's shim not working; it's not very clear. I've asked @pjones to take a look and see if he can spot what we are doing wrong.
  • I still need to write up a sop/docs (which will include what needs to be done when these machines are rebooted to reload encrypted secrets)
  • We need to test (using rawhide grub/kernel) more widely once we get the above all solved.

There's a light at the end of the tunnel. Hopefully not a train!
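For the "did the cert actually land in the binary" class of problem above, one cheap structural check is whether the PE optional header's Certificate Table data directory (entry 4, where Authenticode signatures are recorded) is non-empty. A minimal parser, as a sketch:

```python
import struct

def has_authenticode_cert(data: bytes) -> bool:
    """True if the PE file has a non-empty Certificate Table entry,
    i.e. an Authenticode signature is attached.

    This only checks that the data directory slot is populated; it says
    nothing about whether the signature actually verifies.
    """
    if data[:2] != b"MZ":
        raise ValueError("not a PE file (missing MZ magic)")
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\0\0":
        raise ValueError("bad PE signature")
    opt = e_lfanew + 4 + 20               # skip the 20-byte COFF header
    (magic,) = struct.unpack_from("<H", data, opt)
    # Data directories start at +112 for PE32+ (0x20b), +96 for PE32.
    dirs = opt + (112 if magic == 0x20B else 96)
    _rva, size = struct.unpack_from("<II", data, dirs + 4 * 8)  # entry 4
    return size != 0
```

In practice pesign's show-signature mode (`pesign -S -i <file>`) is the authoritative check; the snippet just shows where that information lives in the file.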

Owner

We now have x86_64 using the new pipeline and working fine.

I have some test aarch64 signed grub/kernel files, but they don't correctly boot on the two devices I have here that they should work on. They do appear to work in Azure VMs. We are trying to isolate what's going on. :(

As soon as we sort that out we can enable the aarch64 side and close this (finally!)

Owner

An update for today:

We have signed aarch64 grub2 builds in koji/rawhide.
We have a pull request to enable aarch64 signing for the kernel: https://gitlab.com/cki-project/kernel-ark/-/merge_requests/4415

Once that's merged and the kernel is built, end-to-end testing should be possible for everyone.
Interested folks can test grub2 now.

Still doesn't work here on my slim7x or my orion o6. ;( But everyone else has things working, so it's likely I am just unlucky and both these devices have firmware bugs.

I guess I'll go ahead and close this 7 year old ticket now. ;)

Thanks for the journey everyone.

kevin closed this issue 2026-03-23 18:19:29 +00:00