Rootless Podman cannot bind ports below 1024; by default only root can (on pretty much any distro, I guess). Have you done something like `sysctl net.ipv4.ip_unprivileged_port_start=80` to allow non-root processes to bind to port numbers >= 80?
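If not, something like this should do it, including making it persistent across reboots (the drop-in filename is just an example):

```sh
# One-off (lost on reboot):
sudo sysctl net.ipv4.ip_unprivileged_port_start=80

# Persistent, via a sysctl drop-in (filename is an example):
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-unprivileged-ports.conf
sudo sysctl --system
```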
skilltheamps@feddit.org to Selfhosted@lemmy.world • Immich 2.1 Released with Better Slideshow Shuffle, New Notifications (English)
4 · 14 days ago
That’s what I thought, but last time I looked I only saw a “release” tag, no “v2” tag. Did I miss something?
skilltheamps@feddit.org to Selfhosted@lemmy.world • Immich 2.1 Released with Better Slideshow Shuffle, New Notifications (English)
3 · 14 days ago
That server is also a homeserver I manage for family (in another city). The two homeservers back each other up.
skilltheamps@feddit.org to Selfhosted@lemmy.world • Immich 2.1 Released with Better Slideshow Shuffle, New Notifications (English)
2 · 14 days ago
They show images from the same day in past years. So if your library has no images >= 1 year old, I’m not sure if anything shows up.
skilltheamps@feddit.org to Selfhosted@lemmy.world • Immich 2.1 Released with Better Slideshow Shuffle, New Notifications (English)
8 · 14 days ago
The same way as all other services: all relevant data (the compose.yml and all volume mounts) lives in a btrfs subvolume. Every night a snapshot gets made and mirrored to a remote server by btrbk.
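To give an idea, a minimal btrbk.conf for one such service could look like this (paths, hostname and retention values are just examples, not my actual config):

```
# /etc/btrbk/btrbk.conf -- minimal example; paths/host/retention are made up
timestamp_format        long
snapshot_preserve_min   2d
snapshot_preserve       14d
target_preserve_min     no
target_preserve         20d 10w

volume /srv
  snapshot_dir _btrbk_snap          # must exist on the source volume
  subvolume immich                  # subvolume holding compose.yml + volumes
    target ssh://backup.example.com/mnt/backup/immich
```

Then `btrbk run` from a nightly systemd timer or cron job does the snapshotting and incremental send/receive to the remote.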
You need to ask yourself what properties you want from your storage; then you can judge which solution fits. For me it’s:

- The amount of data I’m handling fits on larger hard drives (so I don’t need pools), but I don’t want to waste storage space.
- My homeserver is not my learn-and-break-stuff environment anymore; it just needs to work.
I went with btrfs RAID 1, and every service lives in its own subvolume. The containers are pinned by their digest hashes, and those pins get snapshotted together with all persistent data, so every snapshot holds exactly the amount of data required for a seamless rollback. Snapper maintains a timeline of snapshots for every service.

Updating is semi-automated: snapshot -> update the digest hashes from the container tags -> pull the new images -> restart the service. Nightly offsite backups happen with btrbk, which mirrors the snapshots incrementally to another offsite server running btrfs.
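For illustration only, a sketch of what that update flow could look like as a script (the service name, paths, snapper config and the digest-refresh helper are all made up, not my actual tooling):

```sh
#!/bin/sh
# Sketch of the semi-automated update for one service.
# SERVICE, DIR, the snapper config and update-digests.sh are hypothetical examples.
set -eu

SERVICE=immich
DIR=/srv/$SERVICE              # btrfs subvolume holding compose.yml + volume mounts
cd "$DIR"

# 1. Pre-update snapshot: image pins and data roll back together.
snapper -c "$SERVICE" create --description "pre-update"

# 2. Re-resolve the digest pins in compose.yml from the images' tags
#    (update-digests.sh stands in for whatever queries the registry).
./update-digests.sh compose.yml

# 3. Pull the newly pinned images and restart the service.
podman compose pull
podman compose up -d
```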