I think I fixed it.
@phnt @rez @vekkq never had any explicit errors, which makes me think there wasn't any corruption or anything. pg_amcheck didn't give any complaints either.
I changed my retention days back to 90 to try to keep a lid on the overall DB size (too cheap to pay for more disk...) but getting interrupted put it in a bad state I guess. We'll find out next Monday
@prettygood @rez @vekkq Database timeouts probably caused a desync between what was thought to be federated and what actually made it into the DB. A restart always fixes that. The prune task is just a bunch of deletes that technically shouldn't break anything, but it puts a lot of stress on the database, which is why I always say to only run it with Pleroma stopped unless the disks can handle it (they never can on a VPS). VACUUM FULL should be done unconditionally with Pleroma stopped, unless you use pg_repack and the disks can handle it (again, they never can on a VPS).
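A minimal sketch of the VACUUM FULL procedure described above, assuming a systemd service and a database both named `pleroma` (adjust to your setup):

```shell
# Hypothetical service/database names; adjust to your installation.
sudo systemctl stop pleroma

# VACUUM FULL rewrites tables and holds exclusive locks, which is
# why it must run while Pleroma is stopped.
sudo -u postgres psql -d pleroma -c 'VACUUM FULL;'

sudo systemctl start pleroma
```

With pg_repack you could instead repack tables online without the exclusive locks, but the thread's point stands: on VPS-class disks the I/O load alone is usually too much while the instance is serving traffic.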
@prettygood @rez @vekkq
>which yes I understand is merely just deletes but do you have handy a query that can replace it directly
No, and I don't really see a reason for that. The queries are partially buried here in a series of Ecto pipes: https://git.pleroma.social/pleroma/pleroma/src/branch/develop/lib/mix/tasks/pleroma/database.ex#L115
@phnt @rez @vekkq
>I don't see a reason for that.
Unless I'm completely mistaken, Pleroma has to be running to use the prune_objects task, no? So given the I/O limitations of a VPS, is it better to run "pleroma_ctl database prune_objects --keep-non-public --prune-orphaned-activities --keep-threads" while Pleroma is running, or shut it down and run manual delete queries?
@prettygood @rez @vekkq It runs in a half-turned-off state. For example, no incoming federation should occur, as it is specifically configured not to serve any endpoints: https://git.pleroma.social/pleroma/pleroma/src/branch/develop/lib/mix/pleroma.ex#L26
Shut it down normally as a service and then run the task, which will turn Pleroma on only partially.
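Put together, the procedure might look like this. The service name `pleroma` and the `pleroma_ctl` path are assumptions (they vary by install method; OTP releases typically ship `bin/pleroma_ctl`):

```shell
# Hypothetical service name and install path; adjust to your setup.
sudo systemctl stop pleroma

# The mix task boots Pleroma itself in a limited mode (no HTTP
# endpoints are served), so the full service must already be stopped.
/opt/pleroma/bin/pleroma_ctl database prune_objects \
    --keep-non-public --prune-orphaned-activities --keep-threads

sudo systemctl start pleroma
```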