Too many Login entries in the Redis DB
# support
s
I am self-hosting Medplum and noticed that it creates a Login record in the Redis server every time a client logs in. Over a few days, this can add up to thousands of login keys. Is this expected? Are they not supposed to expire? Should we delete them manually?
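For context, here is a rough sketch of how one might check whether those keys carry a TTL, assuming an ioredis client; the `*Login*` match pattern is just a guess for illustration, so adjust it to whatever prefix your instance actually uses:

```typescript
// Hedged sketch: scan for login-ish keys and report their TTLs.
// The "*Login*" match pattern is a guess; adjust to the prefix you see in redis-cli.
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

async function reportLoginTtls(): Promise<void> {
  const stream = redis.scanStream({ match: '*Login*', count: 1000 });
  for await (const keys of stream) {
    for (const key of keys as string[]) {
      // TTL of -1 means the key has no expiry and would only go away via eviction.
      const ttl = await redis.ttl(key);
      console.log(key, ttl === -1 ? 'no expiry' : `${ttl}s remaining`);
    }
  }
}

reportLoginTtls().finally(() => redis.disconnect());
```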
r
Redis has eviction built in, and if it gets full it will start evicting old logins and other keys.
r
That's right. I believe you can tune your Redis instance to choose the memory limit for eviction (https://redis.io/docs/reference/eviction/). Medplum uses volatile-lru as its eviction policy (https://docs.redis.com/latest/rs/databases/memory-performance/eviction-policy/).
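For anyone landing here, a minimal sketch of inspecting (and, on a self-managed instance, adjusting) those settings with ioredis is below. The 256 MB limit is a placeholder, and managed services such as GCP Memorystore usually expose these as instance-level settings rather than accepting CONFIG SET:

```typescript
// Sketch only: read and (for self-managed Redis) set the eviction settings.
// The 256 MB limit is a placeholder; managed Redis typically rejects CONFIG SET.
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');

async function tuneEviction(): Promise<void> {
  // CONFIG GET returns flat [name, value, ...] pairs.
  console.log(await redis.call('CONFIG', 'GET', 'maxmemory-policy'));
  console.log(await redis.call('CONFIG', 'GET', 'maxmemory'));

  // Cap memory so eviction actually kicks in, and keep volatile-lru so that
  // only keys carrying a TTL are candidates for eviction.
  await redis.call('CONFIG', 'SET', 'maxmemory', String(256 * 1024 * 1024));
  await redis.call('CONFIG', 'SET', 'maxmemory-policy', 'volatile-lru');
}

tuneEviction().finally(() => redis.disconnect());
```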
n
> Medplum uses volatile-lru for its eviction policy

On server start, bullmq throws a [warning](https://github.com/taskforcesh/bullmq/blob/9307fef3e5f9e5f929fd8755078913a33c2dc783/src/classes/redis-connection.ts#L352):

> IMPORTANT! Eviction policy is volatile-lru. It should be "noeviction"

I'm running against GCP managed Redis, but it looks like it's configured similarly to the CDK script. It also looks similar to the terraform/gcp version in the open PR (except that one's configured to [volatile-ttl](https://github.com/medplum/medplum/pull/5390/files#diff-a58ec1195c15117097420636a74a91d74cb589ad621b4afa72937b8a322881e0R15) 🤔). Do you think this warning is benign? Happen to have any tips on suppressing it if so?
r
@nathan-watkins unfortunately this is a known issue with bullmq, and there's a ticket filed against them. You're right that this is benign, but the fix would have to happen upstream.
n
Thank you! Yeah, I saw this [issue](https://github.com/taskforcesh/bullmq/issues/2737). I'm not sure there's a "fix" short of standing up a second Redis instance just for bullmq with `noeviction`, or possibly updating the global `console.warn` to intercept it, which seems like overkill.
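If anyone does want to go the `console.warn` route despite the overkill, a minimal sketch is below; the matched strings are taken from the bullmq message quoted above and may need adjusting for other bullmq versions:

```typescript
// Sketch: swallow only the bullmq eviction-policy warning; everything else
// still goes to the original console.warn.
const originalWarn = console.warn.bind(console);

console.warn = (...args: unknown[]): void => {
  const message = args.map(String).join(' ');
  if (message.includes('Eviction policy is') && message.includes('noeviction')) {
    return; // drop only this specific warning
  }
  originalWarn(...args);
};
```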