Discussion: [PATCH] Add CHECK_FOR_INTERRUPTS in scram_SaltedPassword loop.
Hi hackers,

When the scram_iterations value is set too large, the backend can hang for a long time, and Ctrl+C cannot cancel the query because the loop doesn't process signal interrupts.

Adding CHECK_FOR_INTERRUPTS within the loop of scram_SaltedPassword, so that any signals received during this period are handled, may be a good choice.

I wrote a patch to solve this problem. What are your suggestions?

Regards,
Bowen Shi
Attachments
Hi,

> When the scram_iterations value is set too large, the backend can hang
> for a long time, and Ctrl+C cannot cancel the query because the loop
> doesn't process signal interrupts.
>
> Adding CHECK_FOR_INTERRUPTS within the loop of scram_SaltedPassword,
> so that any signals received during this period are handled, may be a
> good choice.
>
> I wrote a patch to solve this problem. What are your suggestions?

Thanks for the patch.

It sort of makes sense. I wonder though if we should limit the maximum number of iterations instead. If somebody specified 1_000_000+ iterations this could also indicate a user error.

If we want to add CHECK_FOR_INTERRUPTS inside the loop I think a brief comment would be appropriate.

--
Best regards,
Aleksander Alekseev
> On 22 Nov 2023, at 14:30, Aleksander Alekseev <aleksander@timescale.com> wrote:
>
> It sort of makes sense. I wonder though if we should limit the maximum
> number of iterations instead. If somebody specified 1_000_000+
> iterations this could also indicate a user error.

I don't think it would be useful to limit this at an arbitrary point; the iteration count can be set per password, and if someone wants a specific password to be super-hard to brute force then why should we limit that?

> If we want to add CHECK_FOR_INTERRUPTS inside the loop I think a brief
> comment would be appropriate.

Agreed, it would be helpful.

--
Daniel Gustafsson
Daniel Gustafsson <daniel@yesql.se> writes:
>> On 22 Nov 2023, at 14:30, Aleksander Alekseev <aleksander@timescale.com> wrote:
>> It sort of makes sense. I wonder though if we should limit the maximum
>> number of iterations instead. If somebody specified 1_000_000+
>> iterations this could also indicate a user error.

> I don't think it would be useful to limit this at an arbitrary point; the
> iteration count can be set per password, and if someone wants a specific
> password to be super-hard to brute force then why should we limit that?

Maybe because it could be used to construct a DOS scenario? In particular, since CHECK_FOR_INTERRUPTS doesn't work on the frontend side, a situation like this wouldn't be interruptible there.

I agree with Aleksander that such cases are much more likely to indicate user error than anything else.

regards, tom lane
> I don't think it would be useful to limit this at an arbitrary point; the
> iteration count can be set per password, and if someone wants a specific
> password to be super-hard to brute force then why should we limit that?
I agree with that. Maybe some users do want a super-hard password.
RFC 7677 and RFC 5802 don't specify the maximum number of iterations.
> If we want to add CHECK_FOR_INTERRUPTS inside the loop I think a brief
> comment would be appropriate.

This has been completed in patch v2 and it's ready for review.
Regards
Bowen Shi
Attachments
Hi,

>> If we want to add CHECK_FOR_INTERRUPTS inside the loop I think a brief
>> comment would be appropriate.
>
> This has been completed in patch v2 and it's ready for review.

Thanks!

>> I don't think it would be useful to limit this at an arbitrary point; the
>> iteration count can be set per password, and if someone wants a specific
>> password to be super-hard to brute force then why should we limit that?
>
> I agree with that. Maybe some users do want a super-hard password.
> RFC 7677 and RFC 5802 don't specify the maximum number of iterations.

That's a fairly good point. However, the RFC not specifying a maximum doesn't mean we can't impose one. Also, in fact, we already limit the number of iterations to INT_MAX.

If we decide to limit this number even further, the actual problem is to figure out what the new practical limit would be. Regardless of the chosen number there is a possibility of breaking backward compatibility for certain users.

For this reason I believe merging the proposed patch would be the right move at this point. It doesn't make anything worse for the existing users and solves a potential problem for some of them.

--
Best regards,
Aleksander Alekseev
On Thu, Nov 23, 2023 at 11:19:51AM +0300, Aleksander Alekseev wrote:
>>> I don't think it would be useful to limit this at an arbitrary point; the
>>> iteration count can be set per password, and if someone wants a specific
>>> password to be super-hard to brute force then why should we limit that?
>>
>> I agree with that. Maybe some users do want a super-hard password.
>> RFC 7677 and RFC 5802 don't specify the maximum number of iterations.
>
> That's a fairly good point. However, the RFC not specifying a maximum
> doesn't mean we can't impose one. Also, in fact, we already limit the
> number of iterations to INT_MAX.

INT_MAX, as in the limit that we have for integer GUCs and the routines building the hashed entry, so the Postgres internals are what defines the limit here. I doubt that we'll see cases where somebody would want more than that, but who knows in 10/20 years.

> If we decide to limit this number even further, the actual problem is
> to figure out what the new practical limit would be. Regardless of the
> chosen number there is a possibility of breaking backward
> compatibility for certain users.

No idea what the limit should be if it were to be lowered, but I suspect that even a new lower limit could be an issue for hosts with low-end specs when it comes to DOS. It's not like there are no ways to eat CPU when you are already logged in.

> For this reason I believe merging the proposed patch would be the
> right move at this point. It doesn't make anything worse for the
> existing users and solves a potential problem for some of them.

Yeah, agreed. Being stuck in a potentially large tight loop is something we tend to avoid in the backend, so I agree that this is a thing to fix in the backend, especially because we have scram_iterations and it is user-settable.

I think that we should backpatch that down to v16, where the GUC was introduced, as it's more of a nuisance if one sets the GUC to an incorrect value, and I'd like to apply the patch this way. Any objections or comments regarding that?

--
Michael
Attachments
> I think that we should backpatch that down to v16, where the GUC was
> introduced, as it's more of a nuisance if one sets the GUC to an
> incorrect value, and I'd like to apply the patch this way.
Agreed.
The patch has been submitted in https://commitfest.postgresql.org/46/4671/
Regards
Bowen Shi
> On 25 Nov 2023, at 02:20, Michael Paquier <michael@paquier.xyz> wrote:
>
> INT_MAX, as in the limit that we have for integer GUCs and the
> routines building the hashed entry, so the Postgres internals are what
> defines the limit here. I doubt that we'll see cases where somebody
> would want more than that, but who knows in 10/20 years.
>
> No idea what the limit should be if it were to be lowered, but I
> suspect that even a new lower limit could be an issue for hosts with
> low-end specs when it comes to DOS. It's not like there are no ways
> to eat CPU when you are already logged in.

The whole point of this GUC (and the iteration count construct in the spec) is to allow hardened setups to make brute forcing passwords as hard as they choose it to be; setting an upper limit (apart from the INT_MAX implementation detail) where one isn't even mentioned in the RFC makes little sense when the loop can be canceled.

On the flip side, setups with low-end clients can choose to reduce it from the default to make SCRAM an option at all where it was previously too expensive and less secure schemes had to be used.

> I think that we should backpatch that down to v16, where the GUC was
> introduced, as it's more of a nuisance if one sets the GUC to an
> incorrect value, and I'd like to apply the patch this way. Any
> objections or comments regarding that?

I don't see any reason to backpatch further down than 16 given how low the hardcoded value is set there, and scanning the archives I see no complaints about it either. As a reference, CREATE ROLE using 4096 iterations takes 14 ms on my 10-year-old laptop (1M iterations, 244x the default, takes less than a second).

--
Daniel Gustafsson
On Mon, Nov 27, 2023 at 10:05:49AM +0100, Daniel Gustafsson wrote:
> I don't see any reason to backpatch further down than 16 given how low
> the hardcoded value is set there, and scanning the archives I see no
> complaints about it either. As a reference, CREATE ROLE using 4096
> iterations takes 14 ms on my 10-year-old laptop (1M iterations, 244x
> the default, takes less than a second).

Agreed, so I have done it this way. \password has the same problem; we could perhaps do something with a callback or something like that, or perhaps it's just not worth bothering with.

--
Michael