supertokens.go (attached): check for race conditions. No locking when checking email existence and creating users; concurrent signups with the same email could create duplicate users.
You’re right to be concerned. Your SignUp and thirdPartySignInUp flows do not prevent race conditions during concurrent signups with the same email. Here’s what’s happening and why it’s unsafe:
Both emailpassword.SignUp and thirdparty.SignInUp check whether the email already exists and then create the user as separate, non-atomic steps. This creates a classic TOCTOU race condition (time-of-check to time-of-use). In the email/password flow:
signUpResponse, err := originalSignUp(email, password, tenantId, userContext)
// ... then check whether signUpResponse.EmailAlreadyExistsError != nil
Problem: if two concurrent requests reach this point at the same time, both can proceed, and emailpassword may allow both to create a user. The third-party flow has the same shape:
existingUsers, err := thirdparty.GetUsersByEmail(tenantId, email)
// if len(existingUsers) == 0 → allow signup
Problem: if the same email signs up simultaneously via different providers (or even the same one), both requests pass the check and create duplicate accounts.
Ensure your members table (and the SuperTokens DB) has a unique index on email. You likely already have this, but a violation needs to surface as an error you can handle.
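With Ent, that uniqueness is typically declared in the schema. A minimal sketch, assuming a Member entity backing your members table (the entity name is an assumption):

package schema

import (
    "entgo.io/ent"
    "entgo.io/ent/schema/field"
)

// Member is a sketch of the schema behind the members table; only the
// email field matters for this discussion.
type Member struct {
    ent.Schema
}

func (Member) Fields() []ent.Field {
    return []ent.Field{
        // Unique() makes Ent generate a UNIQUE index on the email column,
        // so the database rejects the second of two racing inserts.
        field.String("email").Unique(),
    }
}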
Use a mutex or DB row-level locking (e.g. SELECT ... FOR UPDATE) around the email check + creation. Options:
- An in-process lock map (a sync.Map keyed by email, holding a *sync.Mutex per email): quick to add, but can hit performance limits under high load.
- A DB transaction that performs the check + create atomically (see the fuller sketch below):
tx, err := client.Tx(ctx)
defer tx.Rollback()
// check + create
tx.Commit()
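Here’s a fuller sketch of that transaction-based option with Ent. The entity and package names (Member, yourapp/ent, yourapp/ent/member) are assumptions; adapt them to your generated code:

import (
    "context"
    "errors"

    "yourapp/ent"        // hypothetical generated Ent package
    "yourapp/ent/member" // hypothetical generated predicates package
)

// createMemberOnce checks for an existing email and creates the member inside
// one transaction. The unique index is still the real guarantee: a concurrent
// insert can slip past the Exist check and then surfaces as a constraint error.
func createMemberOnce(ctx context.Context, client *ent.Client, email string) (*ent.Member, error) {
    tx, err := client.Tx(ctx)
    if err != nil {
        return nil, err
    }
    defer tx.Rollback() // effectively a no-op after a successful Commit

    exists, err := tx.Member.Query().Where(member.EmailEQ(email)).Exist(ctx)
    if err != nil {
        return nil, err
    }
    if exists {
        return nil, errors.New("email already exists")
    }

    m, err := tx.Member.Create().SetEmail(email).Save(ctx)
    if err != nil {
        if ent.IsConstraintError(err) { // unique index caught a concurrent insert
            return nil, errors.New("email already exists")
        }
        return nil, err
    }
    return m, tx.Commit()
}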
Wrap your originalSignUp and originalSignInUp calls with your own duplicate detection: catch the database’s unique-constraint violations (e.g. an "email already exists" error) and map them to your own error. Don’t rely solely on EmailAlreadyExistsError; it isn’t reliable under a race.
Add tests that simulate 5+ concurrent signup requests with the same email, and assert that exactly one user ends up stored for that email (a sketch follows below).
Let me know if you want help wiring a mutex-based or Redis lock for this.
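A minimal sketch of such a test; signUpOnce is a hypothetical stand-in for whatever invokes your SignUp override:

import (
    "sync"
    "sync/atomic"
    "testing"
)

// Fires several concurrent signups for one email and asserts only one succeeds.
// signUpOnce is a placeholder for your own call into the signup flow.
func TestConcurrentSignupSameEmail(t *testing.T) {
    const attempts = 5
    var wg sync.WaitGroup
    var successes int32

    for i := 0; i < attempts; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            if err := signUpOnce("dup@example.com", "password123"); err == nil {
                atomic.AddInt32(&successes, 1)
            }
        }()
    }
    wg.Wait()

    if successes != 1 {
        t.Fatalf("expected exactly 1 successful signup, got %d", successes)
    }
    // Also query the members table and assert exactly one row exists for the email.
}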
i’m already using entgo transactions, show me mutex use
Since you’re already using Ent’s transaction, you can use an app-level mutex map keyed by email to serialize concurrent signups for the same email. Here’s how:
var signupLocks sync.Map // map[string]*sync.Mutex
func acquireSignupLock(email string) *sync.Mutex {
    muIface, _ := signupLocks.LoadOrStore(email, &sync.Mutex{})
    mu := muIface.(*sync.Mutex)
    mu.Lock()
    return mu
}
func releaseSignupLock(email string, mu *sync.Mutex) {
    mu.Unlock()
    // Deliberately no signupLocks.Delete(email) here: a naive delete lets a
    // later request LoadOrStore a fresh mutex while another request still
    // holds the old one, which breaks the per-email serialization. The map
    // grows with distinct emails; use reference counting if that matters.
}
In your override (inside the wrapped SignUp or thirdPartySignInUp), do:
lock := acquireSignupLock(email)
defer releaseSignupLock(email, lock)
// now perform the original sign-up logic
signUpResponse, err := originalSignUp(email, password, tenantId, userContext)
// ...
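For context, a hedged sketch of where that sits inside the SuperTokens emailpassword override; the override wiring and the types shown (epmodels.SignUpResponse, supertokens.UserContext) follow the Go SDK but may differ by version:

// Inside the emailpassword Override.Functions callback (sketch; adjust the
// signature to your SDK version).
originalSignUp := *originalImplementation.SignUp
*originalImplementation.SignUp = func(email, password, tenantId string, userContext supertokens.UserContext) (epmodels.SignUpResponse, error) {
    lock := acquireSignupLock(email)
    defer releaseSignupLock(email, lock)
    return originalSignUp(email, password, tenantId, userContext)
}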
Caveats: this only serializes signups within a single process, and the lock map grows with the number of distinct emails (the defer still releases the lock even if SignUp panics or times out, so you won’t deadlock). For multi-instance deployments, use go-redsync or similar:
import "github.com/go-redsync/redsync/v4"
// Acquire lock for `email@example.com`
mutex := redsync.NewMutex("signup:" + email)
// mutex.LockContext(ctx)
// ... signup ...
// mutex.Unlock()
This scales across instances and avoids both per-node lock duplication and unbounded in-process map growth.
If you’re single-instance or early-stage:
lock := acquireSignupLock(email)
defer releaseSignupLock(email, lock)
// Proceed with signup
If you’re production-grade and multi-instance: use the Redis-backed lock above, and keep the DB unique index as the final backstop.
Need help swapping in a Redis-backed lock? I can wire that too.
go import sync
Add this import to your Go file:
import "sync"
You’ll need it for sync.Map and sync.Mutex. Place it alongside your existing imports.
supertokens.go (attached): looks good now?
Your updated file does not yet include the mutex locking logic. There’s no sync.Map, sync.Mutex, or any locking mechanism visible in supertokens.go. The race condition is still present.
At the top:
import (
    "sync"
    // other imports...
)
var signupLocks sync.Map
func acquireSignupLock(email string) *sync.Mutex {
    muIface, _ := signupLocks.LoadOrStore(email, &sync.Mutex{})
    mu := muIface.(*sync.Mutex)
    mu.Lock()
    return mu
}
func releaseSignupLock(email string, mu *sync.Mutex) {
    mu.Unlock()
    // No signupLocks.Delete(email) here: deleting naively can hand a fresh
    // mutex to a later request while another still holds this one.
}
Inside your originalImplementation.SignUp override:
lock := acquireSignupLock(email)
defer releaseSignupLock(email, lock)
Just before:
signUpResponse, err := originalSignUp(email, password, tenantId, userContext)
At the top of thirdPartySignInUp:
lock := acquireSignupLock(email)
defer releaseSignupLock(email, lock)
Once you add those pieces, the signup flow will be correctly serialized per email address — preventing concurrent duplicate signups.
Would you like me to generate the exact diff for the patch?
supertokens.go (attached): this is the updated file
Almost. Here’s a precise breakdown:
You’ve added:
var signupLocks sync.Map
func acquireSignupLock(email string) *sync.Mutex
func releaseSignupLock(email string, mu *sync.Mutex)
and you’ve wrapped your SignUp logic with:
lock := acquireSignupLock(email)
defer releaseSignupLock(email, lock)
This correctly serializes concurrent signups by email within this process.
Your custom thirdPartySignInUp(...) does not currently acquire a lock, so the race can still happen there:
func thirdPartySignInUp(...) {
    // Add this:
    lock := acquireSignupLock(email)
    defer releaseSignupLock(email, lock)
    // rest of the logic...
}
This ensures you also prevent concurrent third-party signups for the same email.
Still make sure your members table enforces a unique index on email, to catch anything leaking past the mutex (e.g. a multi-instance race).
| Path | Lock Added? | Comment |
| --- | --- | --- |
| Email/password SignUp | ✅ | Safe |
| Third-party SignUp | ❌ | Add acquireSignupLock() |
| DB-level uniqueness | 🟡 | Verify it’s enforced |
Once you patch the third-party path, you’re race-safe.
Let me know if you want to test with goroutines or simulate a burst scenario.
whats the idea behind using context like this ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) defer cancel()
That pattern is used to enforce a timeout on long-running operations. Here’s what each part is doing:
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
- context.WithTimeout(...): returns a derived context.Context that automatically cancels after 30 seconds, plus a cancel function to stop it earlier.
- context.Background(): the empty root context to derive from; it is never canceled on its own.
- defer cancel(): releases the context’s timer and resources as soon as the surrounding function returns. Without it, the context would still time out after 30s, but you’d hold the timer and its resources until the deadline fires. On high load, that becomes a problem.
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
err := dbClient.Query().Where(...).Exec(ctx)
If the query hangs, it gets force-canceled after 5 seconds. Cleaner, safer, faster failover.
Let me know if you want a context.WithTimeout helper for your Ent operations.
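For example, a minimal sketch of such a helper; the name withDBTimeout, the 5-second default, and the dbClient call in the usage comment are assumptions:

import (
    "context"
    "time"
)

// withDBTimeout runs fn with a context that is canceled after 5 seconds,
// and releases the timer as soon as fn returns.
func withDBTimeout(parent context.Context, fn func(ctx context.Context) error) error {
    ctx, cancel := context.WithTimeout(parent, 5*time.Second)
    defer cancel()
    return fn(ctx)
}

// Usage (the Ent call shown is illustrative):
// err := withDBTimeout(context.Background(), func(ctx context.Context) error {
//     return dbClient.Member.Create().SetEmail(email).Exec(ctx)
// })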