https://github.com/w3c/webauthn/issues/2038
They apparently came up with a fix for this using something called the Signals API, but I don't think any browser has implemented it yet.
Just wanted to highlight that this part of the UX is hairy and hard to get right
WebAuthn is the only spec born like a 60-year-old legacy technology with global adoption. Everything about it is insane.
They didn't even think about having more than one key plugged in (mostly because the use case was just that the device owns your identity, so they never thought the user would have control over the hardware), and the solution is to just blink all the keys and use the first one the user touches, while hoping the other keys will time out before the user actually has to use them. So much insanity.
We changed it to blink all keys, so that if you tap the wrong one, the browser can at least tell you something sensible and get you unstuck. This wasn’t a hypothetical shot in the dark, but something we tested and actually worked well for real users.
I don’t disagree that WebAuthn has grown well beyond anything we could call good spec design. But it’s worth remembering that there’s /a lot/ of context behind it, and that the average user doesn’t behave anything like an average HN reader.
> hoping the other keys will time out before the user actually has to use them
Both Chrome and Android will cancel requests to all other keys. If your keys are locking up until a timeout it’s more likely the key itself is buggy.
That's just the one I hit yesterday. Making software do crazy things "for the uneducated user" is a sure way to alienate both the educated and the uneducated.
But anyway. What bothers me most about passkeys is that tomorrow someone will realize that requiring passkeys through Google and Apple only cuts spam more than a CAPTCHA. It will happen, and everyone knows it.
Given the requirement for discoverable credentials and sync, truly open/independent passkey implementations seem impossible/impractical. For example, you couldn't just have a set of Trezor-style devices that you load with the same seed and use that as your passkey without syncing the "discoverable" part of the credentials through some kind of cloud service. (The cloud service wouldn't need to be trusted with the actual keys, but you couldn't operate without it.)
As a result, it looks like you can essentially choose which ecosystem you want to lock yourself into...
With authenticatorAttachment, sites have been given a convenient foot-gun to make sure no single setup actually works for all sites, and with both the discoverable and non-discoverable credentials supported, inconsistency in the login flow for maximum confusion is guaranteed.
Add to it that this is like the 4th or 5th iteration of a standard in the field in about 10 years, and there's endless opportunity to get locked out because providers migrated from one standard (or buggy implementation) to another, or start setting things up only for the 6th standard to obsolete what you had (again, potentially locking you out).
And then people are surprised that users stick with passwords.
Sure, not having to type your username is nice, but I'll gladly still do that if it allows "passphrase-based paper-restore-able authenticators" such as the one you describe. (I have one of these, in fact!)
Many services I use that do support WebAuthN allow either variant to be used (i.e. they'll prefer discoverable credentials but will work just fine with non-discoverable ones), and arguably that should be how almost everybody ought to implement it.
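For what it's worth, supporting both is not much extra work on the relying-party side. A rough sketch of such a login flow, where fetchCredentialIdsFor() and the server-issued challenge are placeholders:

```ts
// If the user typed a username, restrict the request to their known credential
// IDs (non-discoverable flow); otherwise send an empty allowCredentials list and
// let the authenticator offer its discoverable credentials.
declare function fetchCredentialIdsFor(username: string): Promise<BufferSource[]>;

async function startLogin(username?: string): Promise<PublicKeyCredential> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // really issued by the server

  const allowCredentials: PublicKeyCredentialDescriptor[] = username
    ? (await fetchCredentialIdsFor(username)).map((id) => ({ type: "public-key" as const, id }))
    : []; // empty list => discoverable-credential ("usernameless") flow

  return (await navigator.credentials.get({
    publicKey: {
      challenge,
      rpId: "example.com", // placeholder relying-party ID
      allowCredentials,
      userVerification: "preferred",
    },
  })) as PublicKeyCredential;
}
```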
Unfortunately, at least as many other services completely botch it, e.g. by making discoverable credentials mandatory, by allowlisting browsers (e.g. Paypal), allowlisting authenticators (e.g. my government's e-signature platform), or by using them in a functionally braindead way (e.g. Amazon, who for completely unfathomable reasons still requires TOTP behind WebAuthn, i.e. they replace the password with it, not the second factor).
So far I haven't noticed a strong trend towards enforcing discoverable credentials, but let's please name and shame everybody doing that. It's completely unnecessary.
The problem with "many" is that unless it's 100% of the ones someone cares about, the solution can't really solve the problem, adding an additional pain in the ass and making it easier to just stick with passwords.
Sometimes, a lack of choices is a feature. (Compare e.g.: IPSec vs. Wireguard).
> Sometimes, a lack of choices is a feature. (Compare e.g.: IPSec vs. Wireguard).
HTTP vs. HTTPS seems like a more appropriate comparison in this context. Passwords and OTPs are really, really phishable.
The workaround now isn't passkeys, which few people understand. Instead, most seem to be migrating to external password managers. Honestly, I don't have many arguments against this, as these at least generate safe passwords. There are many advantages to this approach.
I believe that moving forward, sticking to passwords might indeed be more viable. I think explaining to users how to upload their public SSL key is safer and more universal at this point.
If you don't offer a password as a login method, I will not use your service. The worst are those that only offer codes via email/SMS or social login - miss me with that …
Sadly, the world has become too dumbly complacent to question its devices.
Would you argue that loading a public key (load it where, actually?) is much faster? How'd you do it practically?
Yes, when you get your phone stolen on a trip and can't log into anything.
Or when you realize nobody cares about the 5 nerds using those and requires an Apple or Google passkey.
IMHO that and a TOTP seems to be a sweet spot.
I'll still take them over SMS-OTP any day, but admittedly even that at least offers some technical benefits over TOTP, e.g. in that the relying party can tell me what I am consenting to in the message ("by entering this code, you approve a payment of $1000 to evilshop.com").
2 factor authentication using 2 simple mechanisms is great.
Password for most cases, and then for high-value things, ask me for 2FA. For things like banks and anything money-related, SMS 2FA already exists and is good enough. For normal websites, ask me for 2FA on uncommon yet important actions, such as logging in (everyone can use long-lived sessions these days), repo deletion on GitHub, etc.
TOTP is also a really nice mechanism, especially in authenticator apps today that can backup your keys to cloud storage.
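For reference, the TOTP computation itself is tiny. A minimal sketch (RFC 6238 over RFC 4226) using WebCrypto, with the base32 decoding of the QR-code secret left out:

```ts
// HMAC-SHA1 over the 30-second counter, dynamic truncation, six digits.
import { webcrypto as crypto } from "node:crypto";

async function totp(secret: Uint8Array, now = Date.now()): Promise<string> {
  // 8-byte big-endian counter of 30-second steps since the Unix epoch.
  const counter = new DataView(new ArrayBuffer(8));
  counter.setBigUint64(0, BigInt(Math.floor(now / 1000 / 30)));

  const key = await crypto.subtle.importKey("raw", secret, { name: "HMAC", hash: "SHA-1" }, false, ["sign"]);
  const mac = new Uint8Array(await crypto.subtle.sign("HMAC", key, counter.buffer));

  // Dynamic truncation: take 31 bits starting at the offset encoded in the last nibble.
  const offset = mac[19] & 0x0f;
  const code =
    (((mac[offset] & 0x7f) << 24) | (mac[offset + 1] << 16) | (mac[offset + 2] << 8) | mac[offset + 3]) % 1_000_000;
  return code.toString().padStart(6, "0");
}
```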
I know "SMS" and "backup keys to cloud storage" gets the security folks off their chairs, but outside a theoretical setting they're both a perfectly good tradeoff.
> Passwords are rubbish.
Hard, hard disagree. They’re really not. Password reuse is rubbish. Passwords human beings can remember are rubbish. But a secure password — i.e., a random value with 128 bits of entropy (such as a random 28-letter string) known only to the two parties to an authentication — is not rubbish.
There is the very minimum amount of protocol necessary: one party asks for it; the other party provides it.
The end user can pick his own software to manage his passwords, or none at all (a piece of paper in a wallet is remarkably secure), and the relying party has no ability to approve or disapprove.
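That arithmetic checks out: each random lowercase letter carries log2(26) ≈ 4.7 bits, so 28 of them is about 131.6 bits. A trivial generator, just to make the point concrete:

```ts
// Each letter drawn from a CSPRNG (node:crypto randomInt).
import { randomInt } from "node:crypto";

const alphabet = "abcdefghijklmnopqrstuvwxyz";
const bitsPerChar = Math.log2(alphabet.length);       // ≈ 4.70 bits
console.log((28 * bitsPerChar).toFixed(1));           // ≈ 131.6, i.e. more than 128 bits

const password = Array.from({ length: 28 }, () => alphabet[randomInt(alphabet.length)]).join("");
console.log(password);
```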
I do agree that WebAuthn offers very real improvements over passwords (principally due to no longer being a shared secret), but it makes things worse for the users in a few ways. For one, the ability of relying parties to blacklist or whitelist authenticators tramples on the user’s freedom to use the software he wants. Attestation keys and enterprise attestation are user-hostile: users and servers are no longer equal parties.
And finally, the user experience of passkeys with, say, a phone-based authenticator is miserable: one must interrupt one’s computer usage, pick up the phone, unlock the phone, open the notification and unlock the app, then put the phone down.
All in all, while WebAuthn does offer real advantages, I am concerned by how it reduces users to mere consumers, digital serfs to their technological overlords.
No, they're still rubbish. Even if you make them 256 bit, passwords are bearer tokens which are reused across multiple authentications, which makes them replayable (if intercepted on the client, in transit, or server-side), phishable, social engineerable etc.
> There is the very minimum amount of protocol necessary: one party asks for it; the other party provides it.
And that's unfortunately too little protocol to be secure for repeated authentications.
> [...] principally due to no longer being a shared secret [...]
No, that's not the most important part of WebAuthN. You could get most of the benefits, i.e. phishing and social engineering resistance, from running it as a symmetric encryption protocol as well. Asymmetric keys "only" make server-side storage less sensitive (in the same way that hashing does for regular passwords).
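To illustrate the point: a purely symmetric challenge-response would already be phishing-resistant if the browser mixed the origin into the response, because a response computed for the phishing page's origin is worthless at the real site. A thought experiment, not an actual WebAuthn mode:

```ts
// Hypothetical symmetric "assertion": HMAC over origin || challenge.
async function symmetricAssertion(sharedSecret: Uint8Array, challenge: Uint8Array): Promise<ArrayBuffer> {
  const key = await crypto.subtle.importKey("raw", sharedSecret, { name: "HMAC", hash: "SHA-256" }, false, ["sign"]);
  // The browser, not the page, would supply the origin here (like clientDataJSON does today).
  const origin = new TextEncoder().encode(location.origin);
  return crypto.subtle.sign("HMAC", key, new Uint8Array([...origin, ...challenge]));
}
```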
> The end user can pick his own software to manage his passwords, or none at all (a piece of paper in a wallet is remarkably secure) and the relying party to has no ability to approve or disapprove.
The same is true for WebAuthN! (The only counterpoint here is attestation, but that is no longer a thing ever since Apple and Google introduced cloud synchronization for their credentials.) The difference is that you now need at least some software, because the calculations are too difficult to do on pen and paper.
> I am concerned by how it reduces users to mere consumers, digital serfs to their technological overlords.
Then... just don't do that! There are several open-source FIDO implementations for you to choose from at this point.
Attestation was most definitely removed from Apple's implementation.
For Google, there's still a (relatively obscure) way to get a non-synchronizing/non-discoverable credential (which is then by definition not a passkey!), which then supports attestation, but that's Chrome+Android specific and wouldn't work on e.g. Chrome on Windows or macOS.
They basically had two choices once they did introduce synchronization: Keep attestations around, but specifically mark synchronized credentials as "not strongly device-bound" (and risk existing relying parties not looking for that flag and drawing incorrect conclusions from receiving such an attestation statement), or get rid of it entirely.
I suspect that they opted for the latter mostly because keeping attestation would have required a lot of work with the FIDO and WebAuthN working groups to introduce that mechanism, not out of a selfless desire to avoid a future "big tech lock-in" (where everybody allows exactly Apple and Google passkeys, but nothing else), but I could definitely see that consideration also playing a role.
Passwords have problems, but fewer than putting all authentication secrets into a single basket or ecosystem (which is what big tech fundamentally wants).
Passkeys are a solution to a manufactured problem, and keep getting pushed because they are a useful big tech honey trap that solidifies their users' captivity in their ecosystems.
KeePassXC has support. Many people use Vaultwarden. And so on.
Also, end users are already locked into Chrome and Safari (and Meta's webview and even worse fates).
Passkeys right now has upsides and downsides, like all technology.
I think they are both too complex/clunky on the data/spec/API side, and not complex enough on the UX/lifecycle side. But likely both will evolve based on the usage patterns that get solidified.
It doesn’t matter if other authenticators could work if a relying party refuses to allow its users to use them.
> Also, end users are already locked into Chrome and Safari …
Not this end user; I am typing this in Firefox right now. Not coincidentally, WebAuthn is yet another bit of complexity making it slightly more difficult to implement a browser. From the perspective of the big tech companies, end users aren't expected to write software, or to run anything the big tech companies haven't vetted.
You keep repeating that, but that's not possible anymore, since both Apple and Google removed attestation from their respective passkey/WebAuthN implementations.
For details, see https://news.ycombinator.com/item?id=42522490.
Disclaimer: I work in security, so my opinions are informed by actually knowing what I'm talking about.
We have witnessed the user-capturing playbook of big tech for decades at this point. Ignoring what they are doing is naive at best, malicious at worst.
I didn't argue big tech isn't doing user capture. I pointed out webauthn is a standard and does not necessitate getting into bed with "big tech".
Attestation enables a relying party to deny users the right of using their own software or devices. That hands over control.
There are still lots of problems with passkeys, but it's worth staying up to date if you want to contribute to that discussion.
It would be great if you’re correct, but these references sure seem to indicate that attestation is still a thing.
Microsoft, November 2024: https://learn.microsoft.com/en-us/entra/identity/authenticat...
Yubico: https://developers.yubico.com/Passkeys/Passkey_relying_party...
Apple: https://developer.apple.com/documentation/devicemanagement/s...
Apple: https://support.apple.com/guide/deployment/managed-device-at...
Google, September 2024: https://android-developers.googleblog.com/2024/09/attestatio...
A Tour of WebAuthn, December 2024 (aka the fine article): https://www.imperialviolet.org/tourofwebauthn/tourofwebauthn...
TIL that Apple still supports attestation for MDMed devices, but MDM means corporate/enterprise managed devices, not regular iPhones and Macs. (I also suspect that these would be non-synchronized in the same way that Google does it.)
Yubico and other "key form factor" authenticators indeed do still offer it, which is why I only mentioned Apple and Google.
So my point stands: Passkeys as implemented by Apple and Google don't support attestation. TFA also does not contradict this.
And how would they? Attestation semantically certifies that a given key will never leave secure embedded hardware; passkeys are intentionally cloud-synchronized and users can replicate them to an unlimited number of devices.
WebAuthn? No, thanks.
It’s one more brick in the wall preventing general-purpose computing. Want to authenticate to Banana Computers? Well, you have to use one of their oDevices, because they will not let you use a RoboPhone to store your passkeys.
And since any solution excluding either of these is a non-starter, ironically the passkey push has made WebAuthN more open when it comes to client choice.
So while I agree that Apple and Google not allowing passkey exports (yet; I am cautiously optimistic that they'll eventually be pushed to offer that too) runs the risk of locking in non-sophisticated users, the future is looking very bright for everybody posting here at least.
Show me a widely available service that filters authenticators based on attestation attributes?
But it's also great advertising against WebAuthn. Hard to believe that this kind of complexity is needed, but as with OpenID Connect it feels like enterprise interests are running the ship, not end-users. Ease of implementation seems like a non-goal.
Much of the specs were created behind closed doors and never done in a way where we could have had outside input. They're completely corporate driven and designed to control users not empower them.
https://lists.w3.org/Archives/Public/public-webauthn/
(nb. I'm not saying the folks were easy to work with or super open to discussion, but it was not some clandestine black kitchen where it was cooked up.)
But I agree that one thing you can't accuse them of is not operating in the open. While I don't agree with some of their decisions, discussing feedback in Github issues as well as on public mailing lists is probably as transparent as it gets.
DARPA was defense money, Xerox PARC was corporate money. The one big success I can quickly name that's "pure" is the web from CERN. (Okay, I looked it up: SMTP, RFC 821 from 1982, was submitted by Jon Postel of USC ISI. But email with the familiar @ was invented at a for-profit company by Ray Tomlinson more than a decade earlier.)
I'm not saying we should just slump into apathy, I'm just trying to point out that many mostly good things came from big corps. (And the usual problem is that they still hold the keys to the kingdom. For example see how hard it is to send mail to MS hosted email inboxes. And of course they hide behind "oh our users choose this aggressive level of filtering".)
I personally tried to stay apprised of passkey's development. After asking several developers and poking around the best I could, I was told several times that it was being primarily developed behind closed doors for corporate interests, invite-only, and wasn't ready for release. The only information available was the WebAuthn forums.
Even now the documentation is still poor, and there's essentially no rationale to understand design and architectural decisions. We're just given a spec and expected to adhere to it.
Saying that "WebAuthn was much more public, but passkey was not." shows that you don't really have a clear and accurate mental model of what passkeys are. Maybe TFA might help?
I'll need to do a write-up for it.
Adding number matching or similar helps ensure that the user initiating the session is the same one approving it - an issue ever since people discovered that Microsoft (among others) would send push messages to authenticate a login, and that users (if spammed late at night with constant requests) would often eventually hit allow just to stop the notifications.
The attacker just has to spam them a few dozen times to get the victim to pick the right one at random and let the attacker in.
This is why good platforms have switched to "type in the number you see", which mitigates this.
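The mechanism itself is simple enough to sketch in a few lines; the names here are made up:

```ts
// Number matching for push MFA: the login page shows a random two-digit number,
// the push notification asks the user to type that number, and the server only
// approves the session if they match.
import { randomInt } from "node:crypto";

const pendingChallenges = new Map<string, number>();

function startPushChallenge(sessionId: string): number {
  const displayed = randomInt(10, 100); // shown on the login screen only
  pendingChallenges.set(sessionId, displayed);
  return displayed;
}

function approvePush(sessionId: string, typedOnPhone: number): boolean {
  // Blind "Approve" taps no longer work: the user has to read the number off the
  // screen they are actually looking at, which a fatigue-spamming attacker can't show them.
  return pendingChallenges.get(sessionId) === typedOnPhone;
}
```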
The big advantage of WebAuthN is that (at least for sane implementations, including all I've seen) there just is no way to enter an attacker-provided number and/or supply a displayed code to an attacker.
I like to use regular ol' cURL when testing out API endpoints, and it would be great if there were some kind of dummy CLI program that I could use to generate the WebAuthn key agreements and materials.
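Nothing like that ships today as far as I know, but the assertion format is simple enough that a toy version fits in a short script. A sketch of what such a tool might produce (rpId and challenge are placeholders; note that WebCrypto emits raw r||s ECDSA signatures, while real WebAuthn assertions are ASN.1/DER-encoded):

```ts
// Toy "software authenticator" sketch (not a real CTAP implementation): make a
// P-256 keypair and produce a WebAuthn-style assertion signature over
// authenticatorData || SHA-256(clientDataJSON).
import { webcrypto as crypto } from "node:crypto";

const te = new TextEncoder();
const sha256 = async (data: Uint8Array) => new Uint8Array(await crypto.subtle.digest("SHA-256", data));

async function main() {
  const rpId = "example.com";              // placeholder relying-party ID
  const challenge = "c29tZS1jaGFsbGVuZ2U"; // placeholder; normally issued by the server

  // Keypair that would be created at registration time; print the public half.
  const keyPair = await crypto.subtle.generateKey({ name: "ECDSA", namedCurve: "P-256" }, true, ["sign", "verify"]);
  console.log("public key (JWK):", await crypto.subtle.exportKey("jwk", keyPair.publicKey));

  // clientDataJSON roughly as a browser would assemble it for navigator.credentials.get().
  const clientDataJSON = te.encode(JSON.stringify({ type: "webauthn.get", challenge, origin: `https://${rpId}` }));

  // authenticatorData: SHA-256(rpId) || flags || 4-byte signature counter (zero here).
  const authData = new Uint8Array(37);
  authData.set(await sha256(te.encode(rpId)), 0);
  authData[32] = 0x01; // user-present flag

  // The assertion signature covers authenticatorData || SHA-256(clientDataJSON).
  const signedData = new Uint8Array([...authData, ...(await sha256(clientDataJSON))]);
  const sig = await crypto.subtle.sign({ name: "ECDSA", hash: "SHA-256" }, keyPair.privateKey, signedData);
  console.log("assertion signature (base64):", Buffer.from(sig).toString("base64"));
}

main();
```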
Also, I hit CTRL-F on this post for the term "portable", and I got zero hits. Both passwords and SSH keys are trivially portable. Not so much with WebAuthn passkeys.
This is my fundamental problem with passkeys: I don't want to use any syncing service.
To be clear, I don't want to deprive other people of the ability to sync their credentials; I simply want to opt out myself. I just want to be able to manually back up and restore my credentials, like I've always done with passwords, but the passkey vendors seem to want to refuse to give anyone this ability. The vendors claim that this is to make phishing impossible, but I abhor paternalism in all forms, and also it's suspicious that this paternalism forces people to use the syncing systems of the passkey vendors, which are usually paid subscriptions. So passkeys become an endless supply of money for the vendors.
It's very telling that passkeys were designed and shipped without any export/import mechanism. You can plainly see the priority of the passkey vendors, which is to lock you in. Allegedly, export/import is coming sometime in the future, but I strongly suspect that they'll end up with some kind of "approved provider" system so that the big passkey vendors can retain absolute control and avoid giving power to the users.
I am also slightly paranoid as a security engineer, and admit that whole heartedly.
I wonder if there would be a way for vaultwarden to wrap passkeys such that a hardware FIDO2 key is needed to decrypt them "per-use", and prevent software on the host from stealing a pile of passkeys that give direct access to accounts without further MFA.
Right now it feels like passkeys in the password manager is akin to storing MFA seeds and recovery keys in the same password manager...
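Something along those lines is becoming possible with the WebAuthn PRF (hmac-secret) extension, where the browser and authenticator support it: each assertion can return a secret derived inside the hardware key, which the vault could use to wrap individual entries. Vaultwarden doesn't do this today as far as I know; a rough browser-side sketch, with the credential ID and salt as placeholders:

```ts
declare const storedCredentialId: Uint8Array; // credential ID saved at registration (placeholder)

async function unlockEntryKey(): Promise<CryptoKey> {
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      allowCredentials: [{ type: "public-key", id: storedCredentialId }],
      extensions: {
        // Cast because older DOM typings may not include the prf extension yet.
        prf: { eval: { first: new TextEncoder().encode("vault-wrap-v1") } },
      } as any,
    },
  })) as PublicKeyCredential;

  const prf = (assertion.getClientExtensionResults() as any).prf?.results?.first as ArrayBuffer | undefined;
  if (!prf) throw new Error("authenticator/browser does not support the PRF extension");

  // Use the PRF output as key material for an AES-GCM wrapping key
  // (better practice would be to run it through HKDF first; kept short here).
  return crypto.subtle.importKey("raw", prf, "AES-GCM", false, ["encrypt", "decrypt"]);
}
```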
I wrote a quick PoC using certificates to encrypt a password, with the cert private key 'stored' in the TPM, with a PIN. This is pretty easy on Windows, which exposes the TPM as a special crypto provider.
If you wanted to go a step further, you could use a smartcard with hardware PIN reader as a PKCS11 crypto device, and use that to decrypt the long lived keys in the store, then pass it back to the host encrypted by a platform-protected key to be decrypted and used.
If you could get the right implementation specifics together, you could likely then have the smart card simultaneously re-encrypt the credential with a key bound to PCR state of the TPM via a policy. You'd then decrypt that ciphertext on TPM without a PIN, but conditional on PCR state of a couple of PCRs that represent your system like the secure boot toggle state and allowed CAs.
That lets you be a bit more "cross device" than a fully TPM solution does, though your certificate technique works fine as long as you keep an offline backup for enrollment if anything changes on your system.
Perhaps this is excessive, but it's a model where I like to see layers of security that depend on different, uncorrelated failures being required to bypass them.
Today, if you want to get into an account using "FIDO2 as MFA", you need both the account credentials (or the ability to reach the FIDO prompt, say via a password reset) and the hardware token device (with an optional PIN). The device alone being compromised shouldn't get you into the account.
The moment you go "passkey" and have to use a system like the one you suggest, you need to trust software based storage of long term credentials.
That isn't the case with a hardware FIDO2/U2F token, which has unlimited capacity for non-resident MFA keys: the server holds the key handles for you, and the token unwraps and uses them locally to sign login attempts.
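The trick behind that "unlimited capacity" is that nothing per-site is stored on the token at all: the per-site key is derived (or unwrapped) from a single device secret plus the credential ID the server hands back. Roughly like the following sketch; vendors differ in the details, and Yubico has documented a scheme along these lines:

```ts
import { hkdfSync, randomBytes } from "node:crypto";

const deviceSecret = randomBytes(32); // the one secret the token actually stores

// Registration: mint a credential ID and derive per-site key material from it.
// A real token would use this as the seed for the per-site EC keypair and also
// MAC the credential ID so it can recognize its own handles later.
function register(rpId: string) {
  const credentialId = randomBytes(32);
  const perSiteKey = Buffer.from(hkdfSync("sha256", deviceSecret, credentialId, rpId, 32));
  return { credentialId, perSiteKey }; // server stores credentialId plus the public key
}

// Authentication: the server sends the credential ID back, and the same key
// material falls out of the derivation; no per-site state lives on the token.
function authenticate(rpId: string, credentialId: Buffer) {
  return Buffer.from(hkdfSync("sha256", deviceSecret, credentialId, rpId, 32));
}
```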
I liked that FIDO seemed to get towards hardware backed security modules for login, without cognitive load of worrying about number of sites and yubikey slot capacity. Resident Webauthn keys limit the number of sites you can have, and push you towards software based solutions (so you lose out on doing the crypto on the single purpose, limited platform that's dedicated to generating those signatures).
However, I don't know whether it's possible to delete only a single resident key you no longer need.
This adds another step that needs to be considered for the user, as finite storage means a whole edge case to handle (can't register because the slots are full) and no simple actionable step to take ("which account would you like to never be able to log into again?" or "sorry, you need to wipe this key and lose everything, or buy another one").
I feel there is a usability aspect of FIDO2 (for non-resident MFA) that is being overlooked - the paradigm was simple - a physical key you don't lose, and you can have multiple keys. The gotcha was no way to replicate backup keys, which becomes fairly difficult for users. But hey - passkeys launched with no export or migration process between closed device ecosystems!
From my perspective though, I won't use passkeys until I get sufficient control over them to be allowed to decide if I want to make them "resident" or not. (I don't want resident keys!!)
I want to use non-resident keys everywhere as a hardware-backed second factor that is phishing resistant, without capacity limitations (so zero cognitive burden on whether to use or not).
It feels like a regression for passkeys to be forgetting about what (for me at least) was the core basic use-case of FIDO2 - as a highly secure second factor for someone who already can manage storage of secrets in software, and just wants high assurance phishing resistant MFA during their conventional login process.
You can, it’s part of CTAP2 and various apps like Yubico Authenticator are available to do it.
It’s not user-friendly, but it is possible.
Once the technology is there to support it, hopefully the user experience part can be improved with time.
Ref in the standard - https://fidoalliance.org/specs/fido-v2.1-ps-20210615/fido-cl...
I haven't really looked into it myself, but it seems to be using the same database format as KeePass, and it hooks into macOS's "FIDO provider" API, which makes it accessible to not only Safari but all browsers that use it (which includes Firefox and Chrome on macOS, and probably everything on iOS), without requiring any browser-side extension.
Ideally, some of this could actually be solved by having a government organization that provides this and is regularly updated / audited etc. but in the US at least, we are not in any place for that to happen, so you need to pick a provider.
Apple is reasonably good at this, if you're in their ecosystem. Can't speak for Google. 1Password has been very good to me as well, and there are Yubikeys too.
Nothing is perfect, but this is a far, far better state than where things were heading before WebAuthN.
If by "something" you mean an internet syncing service, then no, I don't.
I do trust my own backup methodology.
Personally I’m a big 1Password fan and have been in the Apple ecosystem for a very long time as well.
Most security folks I trust also vouch for them, as far as practices and effectiveness goes of their software.
But you’ll need to trust something somewhere, and you might even need to expose it to a network in some cases.
The one thing I really like about the Yubikey is that it doesn't require a network connection at all to work, but that model never caught on widely enough to be broadly supported, so while I do use my Yubikey a fair amount, there are still things that don't accept it that I wish did.
Again, no, I don't, and you still haven't explained why. Unless you mean that the big tech companies will force me to use a sync service whether I want it or not.
Moreover, you've ignored my point about paid subscriptions.
They do? I don't see how, since non-discoverable WebAuthN credentials make phishing just as impossible.
The only thing discoverable credentials allow on top of non-discoverable ones is avoiding having the user type in their username or email address.
And sure, I understand that most people need the paternalistic form, where they are not given any guns and are also unable to export their keys from some service.
For example, with TOTP, the key is given to the user in the QR code, but common authenticator apps are unable to export the same data after it was imported. But not all; and the only bad thing about this is that the export restriction is a surprise to those who didn't expect it.
If I were more conspiracy minded, I would suspect some sort of agent provocateur ruining our standards. However, I am unable to come up with a profit motive, so my only conclusion is incompetence.
SSH keys (and any other keypair shared across services) are a non-starter on the web for privacy reasons. (See also: `ssh whoami.filippo.io`.)
Because WebAuthn is such a nonstarter, I am actually going to try and half-ass it using SubtleCrypto.sign() and friends, to sort of mimic the WebAuthn API. This is really just a weekend project, nothing important. But I feel really stupid every time I work on it, mainly because of how ridiculous it is to have your key infrastructure managed by the service you are logging into.
However, due to domain sandboxing, I have half convinced myself it is as secure as using a cookie to auth the person, perhaps even a little better because I never have to see a secret. Then I fall into despair again over how stupid this whole endeavor is, because I could see the keys anytime I want to. (Sighs, shakes fist at the sky.) Why could you not have made WebAuthn usable?
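A minimal sketch of that approach with SubtleCrypto (saveKeyPair/loadKeyPair are placeholder IndexedDB helpers; generating the key as non-extractable at least means the signing key itself never becomes script-readable, since CryptoKey objects can be persisted via structured clone):

```ts
declare function saveKeyPair(keys: CryptoKeyPair): Promise<void>; // e.g. put() into an IndexedDB object store
declare function loadKeyPair(): Promise<CryptoKeyPair>;

async function enroll(): Promise<JsonWebKey> {
  const keyPair = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // non-extractable: even the enrolling page can't read the private key back
    ["sign", "verify"],
  );
  await saveKeyPair(keyPair);
  return crypto.subtle.exportKey("jwk", keyPair.publicKey); // send the public half to the server
}

async function answerChallenge(challenge: Uint8Array): Promise<ArrayBuffer> {
  const { privateKey } = await loadKeyPair();
  return crypto.subtle.sign({ name: "ECDSA", hash: "SHA-256" }, privateKey, challenge);
}
```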
All the problems I have with it as a user originate from relying parties doing dumb/user-hostile things (enforcing resident keys even though I'm perfectly capable of remembering my email address or my username, improperly layering WebAuthN with existing second factors, etc.).
These are possible because WebAuthN is trying to provide for many use cases at once, but I've never felt like it was missing something, and user-friendly behavior is definitely possible. I've seen many examples at this point.
Really, I don't want to reimplement WebAuthn; I will probably be sticking with basic auth, as it just works. However, I was hoping to finally get decent public key auth, and WebAuthn is close, really close, but it is like the designers gave up at the last second and said "no, we don't want this to work in the general case" - all it would have taken was to say software tokens are an OK fallback. It was so close that out of frustration I spent a weekend on an experiment to make public key auth work for everybody. It works, but is a bit pointless, as then I, the service needing to authorize somebody, am the same person providing them their public key management system. I might as well cut out all the ridiculous bit twiddling and just use cookies, for all the security that grants the end user.
That's unfortunately not how it works. TLS sits at the transport layer, so it's not possible for a website to use these certificates for a "login-like flow". The site doesn't get to present to the user why and to whom they are authenticating, since transport layer authentication has to happen before HTTP even gets a single request in.
There is also no "logout" button. It shares these UX problems with HTTP "basic authentication" (even though that's technically an application-layer protocol).
On top of that, TLS is these days often terminated by a load balancer or even a completely separate entity like Cloudflare. I'm not sure if you can configure these to request client certificates at all; even if you can, it makes things pretty awkward if you want closer control of the authentication flow.
> Privacy should be fine with TLS 1.3
It's not fine at all. Any HTTP server can request your client certificate, and most users would probably not think twice before clicking "authenticate", which then reveals their long-time stable certificate and public key to a potentially malicious server.
Compare that with WebAuthN, which makes it intentionally impossible to accidentally present the certificate for a.com at b.com.
Sure you do. When your local tax office / hospital / large holder of your personal data has an administrative interface that only uses passwords, the administrator gets phished, and your identity is stolen, suddenly you care a great deal.
Phishing is a technology issue, not a user issue.
You simply cannot know what mindset you'll be in when you get phished :)
Edit: To clarify, I was itching to work because it helped distract me from the reality that someone so dear to me was gone forever. I didn't want to cancel my leave, though, because my output would have been absolutely turdy.
This is even more pronounced thanks to the efforts to roll out passkeys to the masses. Most of them don't understand what they're getting into and are most likely gonna get themselves locked out quite quickly, which may mean recovery flows need to actually become more relaxed than they currently are.
Sorry, why would you ever disclose your private key to some online forum? I can't see a situation where it makes sense.
Clearly there's something I'm not thinking of, so I'm genuinely curious
Even with an online 'form' (presumably a phishing page) I don't understand why anyone would ever upload private keys for their wallets.
In the case of exchanges, users typically don't get access to the private key for the wallets anyway, so pretending to be an exchange to phish for something the victim can't even provide wouldn't make sense.
In the case of a local wallet, the whole purpose is personal ownership of the coins—which obviously becomes moot when sending the private key to some random person—so I don't see why a user would upload them in this circumstance either.
Though yes, the situation is certainly more understandable than GP posting private keys to an online 'forum' ;)
There's overwhelming empirical and anecdata evidence people make mistakes and fall for phishing. If that doesn't change your mind that much, it's not obvious what reasonably could.
The operational competence of the dba is the same in both cases.
I still see both of those regularly.
Anything on top of that is just fluff. The database can still be compromised directly.
Heck, here's one: "kty": "EC", "alg": "ECDSA_w_SHA256", "crv": "P-256", "x": "xYkEVgMClD28hXHn5JQjrgjRX3crmr0OhGiWKsLvxUY=", "y": "5lZZGFF6VrVubIHfRhGbvQBGpw6LcbP3/ZBVk7PqH0Y="
It's what you'd get if you somehow got into the db and decrypted it.
Now, do feel free to give me your nice symmetric secret password, since it's the same, yeah?
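For completeness, this is roughly what a relying party does with a stored public key like that: import it verify-only and check the assertion signature over authenticatorData || SHA-256(clientDataJSON). (JWK fields have to be base64url-encoded, and WebAuthn delivers ASN.1/DER signatures that need converting to the raw r||s form WebCrypto expects; both are glossed over in this sketch.)

```ts
import { webcrypto as crypto } from "node:crypto";

async function verifyAssertion(
  publicJwk: JsonWebKey,
  authenticatorData: Uint8Array,
  clientDataJSON: Uint8Array,
  rawSignature: Uint8Array, // DER already converted to raw r||s
): Promise<boolean> {
  const key = await crypto.subtle.importKey(
    "jwk",
    publicJwk,
    { name: "ECDSA", namedCurve: "P-256" },
    false,
    ["verify"], // a leaked public key can only ever verify, never sign
  );
  const clientDataHash = new Uint8Array(await crypto.subtle.digest("SHA-256", clientDataJSON));
  const signedData = new Uint8Array([...authenticatorData, ...clientDataHash]);
  return crypto.subtle.verify({ name: "ECDSA", hash: "SHA-256" }, key, rawSignature, signedData);
}
```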
From the standpoint of a non-technical user, they're not any different in nature; a password and a private key are both just strings that give full access to your account. One is easier to remember (password), one is harder to remember (key). The one that is harder to remember usually ends up in a Google doc, iCloud, or saved as a text file.
Not all fluff is equal.
Most implementations throw about 20 "recovery codes" at you and at the absolute fucking worst possible moment while the user is trying to do something urgent, they say "save these in a secure place right now".
It's not 1, but 20 passwords that ALL give access to your account. Where do you think those codes go?
They are not only phishable, but they usually end up in a Google doc, screenshotted and pasted to Notion, or some other insecure place.
Congratulations on being able to maintain good password habits yourself. It sounds like you already know how vanishingly rare that is.
You could accomplish something fairly similar to this for SSH keys actually (an encrypted sync layer + deriving master keys from a local trusted source), but I don't think anyone has done this to the same level of polish.
I infer that you use a password manager.
Consider passkeys as a standardized interface for password managers.
I have not followed WebAuthn spec for a while but I vaguely remember that the spec discouraged software-only authenticators.
Which made me feel like WebAuthn is yet another attempt to take the power from users and even states and concentrate it in the hands of a few multinationals controlled by US government.
Pretty much what happened to Certificate Authorities and the push to use HTTPS everywhere.
Of course there are benefits to HTTPS Everywhere and to passwordless authentication.
But they do not outweigh concerns over the digital autonomy of my country.
As far as I'm aware, you can enforce attestation using MDM and a custom app. But for general people using Safari or whatever, you can't request any kind of device attestation.
Given the convenience factor (no extra install), I imagine that device/platform passkeys to be the most popular, but there should be no problem with using alternatives.
You can make your own backups of passkeys from your password manager.
I believe you are confusing ones that are stored in software or syncable (Passkeys) with the hardware backed credentials (platform authenticators).
The problem is that the protocol allows websites to require use of the latter. If the two were indistinguishable to the website, then passkeys would be a good thing.
The number of people in this thread who don't understand the very basics of this stuff is a pretty good counterexample to the common trope that they somehow "know better." I kind of doubt that when many people here can't even get the basics straight.
https://fidoalliance.org/specifications-credential-exchange-...
I am not migrating to something just because it's new.
My solution works for me. If it doesn't anymore, I'll see what I'll do.
Depending on the service I'll swallow the bitter pill and use another solution or simply not use the service anymore.
Every and any decision has consequences, on any side.
Fight me. Grandmas and Joes end up putting these recovery codes in a Google doc because they don't know what the hell else to do with them. That is NOT more secure.
Hell, even try explaining the difference between the "secret key" and the "password" in 1password to a non-technical person. It's impossible.
"More" implies a comparison to something.
Given that Grandmas+Joes were using "passw0rd!" for all their passwords, I would argue their GDoc of backup codes is considerably more secure.