Conversation
snac2 proudly surviving minuscule loads (like all other fedi servers), but it also has zero tests.

RE: https://soc.octade.net/octade/p/1775197590.747947
@lain while the lack of a test suite is a valid complaint, why the dunk?
@kirby no dunk, it’s working software.
@lain @kirby activitypub has zero tests, what's the point, they could pepper it with asserts and it wouldn't change anything

has pleroma having a test suite helped it federate any?
@lain @kirby imo they've done more harm than good, there's a dozen commits where the tests were wrong for years, and it made no difference
@lain @i @kirby twelve tests were wrong so we should just throw away the other 3000
@i @lain @kirby why should a protocol have tests?!
@feld @lain @kirby might as well, it has produced an over-constrained and frail codebase where nothing can get fixed with any amount of actual confidence. none of the tests have helped avoid repeatedly breaking follows or screwing up the job queue
@i @lain @kirby i don't know man, i think you just talk a lot of shit for the fun of it while I keep running Pleroma without issues on a shitty 4 core 4GB RAM machine with less power than a lightbulb and never run into anything that prevents me from being able to use the network
@feld @i @lain @kirby snac2 runs on 2gb of ram and 1 core on a vps with dogshit disk IOPS. i also don't see the "minuscule load" thing; it's worked fine for me, and it's not like it's been tested with a poast-sized load to say it can't handle it.
@feld @i @kirby @lain pleroma works great on mid-sized servers and mid-sized loads. It's awful on actually constrained environments. cum.salon used to be on the cheapest Frantech server, and back then we called it the "single user at a time" instance because only 1 person could reliably post at a time. snac never chugged there, and snac doesn't bloat up the db (cuz it doesn't have one, hehe. That also makes it play more nicely with zfs than postgres servers do).
@feld @i @lain @kirby
> i don't know man, i think you just talk a lot of shit for the fun of it

Yeah he does that sometimes with basically anything software related. ActivityPub, Pleroma, PostgreSQL, filesystems, you name it. Move along and he'll calm down the next day.
@feld @i @kirby @lain On a related snac2 note: I sure do love processing completely untrusted input with a completely bespoke black-magic JSON parser written in C.
@lain @kirby
>works
>therefore if it works no testing is needed
this is just standard practice sex
@pernia @i @lain @kirby


look at this fucking thing. It is worthless. But it has flash storage, 224GB SSD that cost me $30. The computer itself cost me $50 on eBay.

I can get ~500MB/s reads and 35,000 IOPS on this underpowered turd

> sudo diskinfo -ti ada0
ada0
512 # sectorsize
240057409536 # mediasize in bytes (224G)
468862128 # mediasize in sectors
0 # stripesize
0 # stripeoffset
465141 # Cylinders according to firmware.
16 # Heads according to firmware.
63 # Sectors according to firmware.
WD Green M.2 2280 240GB # Disk descr.
24111Q800334 # Disk ident.
ahcich0 # Attachment
Yes # TRIM/UNMAP support
0 # Rotation rate in RPM
Not_Zoned # Zone Mode

Seek times:
Full stroke: 250 iter in 0.051117 sec = 0.204 msec
Half stroke: 250 iter in 0.077006 sec = 0.308 msec
Quarter stroke: 500 iter in 0.121437 sec = 0.243 msec
Short forward: 400 iter in 0.100602 sec = 0.252 msec
Short backward: 400 iter in 0.093022 sec = 0.233 msec
Seq outer: 2048 iter in 0.369685 sec = 0.181 msec
Seq inner: 2048 iter in 0.454619 sec = 0.222 msec

Transfer rates:
outside: 102400 kbytes in 0.213660 sec = 479266 kbytes/sec
middle: 102400 kbytes in 0.205639 sec = 497960 kbytes/sec
inside: 102400 kbytes in 0.221528 sec = 462244 kbytes/sec

Asynchronous random reads:
sectorsize: 105132 ops in 3.003777 sec = 35000 IOPS
4 kbytes: 102913 ops in 3.003295 sec = 34267 IOPS
32 kbytes: 33816 ops in 3.011817 sec = 11228 IOPS
128 kbytes: 7671 ops in 3.050182 sec = 2515 IOPS
1024 kbytes: 1668 ops in 3.249046 sec = 513 IOPS




y'all gotta stop trying to run Pleroma on servers whose IOPS are too shitty to run a Postgres database. That's the core problem. They get the cheapest VPS on the planet that gives you 5 IOPS per minute and then complain that you can't have more than one user without it freezing

also please stop trying to subscribe to every relay on the fediverse to archive every post that ever existed. Pleroma was not meant for that.
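The IOPS floor feld is talking about can be eyeballed without a real benchmark tool; a rough sketch using plain `dd`, where `oflag=dsync` (GNU dd) forces a flush per 4K block, a crude stand-in for Postgres WAL syncs. The path and block count here are arbitrary choices:

```shell
#!/bin/sh
# Crude proxy for synced-write throughput: each 4K block is flushed to
# disk before dd continues, roughly what Postgres does for WAL commits.
# oflag=dsync is GNU dd; BSD dd lacks it. Numbers are illustrative.
TESTFILE=./dsync-test.bin
dd if=/dev/zero of="$TESTFILE" bs=4k count=1000 oflag=dsync 2>&1 | tail -n 1
rm -f "$TESTFILE"
```

On a "5 IOPS" slab this crawls for minutes; on even a cheap SSD-backed VPS it finishes in a second or two, which is a quick way to tell the two apart before installing anything.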
@feld @i @lain @kirby i ran pleroma well on a 2gb vps (that i cheated with using zram hehehhe), and i even ran conduit (matrix)
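The zram cheat mentioned here is just a few sysfs writes; a sketch of the usual Linux setup (requires root; the 1G size and zstd choice are illustrative, and older kernels may only offer lz4):

```shell
# Compressed RAM swap: lets a small VPS absorb memory spikes that
# would otherwise trigger the OOM killer. All values illustrative.
modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm   # or lz4 on older kernels
echo 1G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0   # higher priority than any disk swap
```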
@feld @pernia @i @lain @kirby I get 35MB/s max speeds (~15x slower) and am capped at 1K IOPS (35x fewer). Subscribed to SPW, FSE, and Baest (when that existed) relays since almost day one. Sure, repack takes like 6 hours, but this is a single-user instance and I run it on one of the worst VPS disks I've seen (speed-wise), only dethroned by BuyVM's slab storage, now kinda on purpose. Also running PostgreSQL 13, probably leaving a lot of performance on the table. Yet it works. 3 years, 40GB, DB maintained probably twice a year with minimal work done. PostgreSQL can run on absolute shit VPS hardware if you know how to optimize it, but not everyone does.

The configurations in the attached images are a joke though. 2GB of RAM and 2 vCPUs should be the minimum requirement listed now, and only for a single-user instance. 1GB of RAM and 1 vCPU is probably undoable after a few months of uptime.
[attachment: image.png]
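"If you know how to optimize it" mostly comes down to a handful of postgresql.conf knobs; a sketch of typical low-IOPS settings (values are illustrative for a ~2GB single-user box, not recommendations, and `synchronous_commit = off` knowingly trades a few seconds of crash durability for write throughput):

```shell
# Illustrative low-end tuning via ALTER SYSTEM; the same lines can go
# straight into postgresql.conf. Assumes psql can reach the server.
psql -U postgres <<'SQL'
ALTER SYSTEM SET shared_buffers = '512MB';           -- ~25% of 2GB RAM
ALTER SYSTEM SET random_page_cost = 1.1;             -- SSD-backed storage
ALTER SYSTEM SET synchronous_commit = off;           -- durability for IOPS
ALTER SYSTEM SET wal_compression = on;               -- fewer WAL bytes to sync
ALTER SYSTEM SET checkpoint_completion_target = 0.9; -- spread checkpoint I/O
SQL
```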
Yuck, a Wyse thin client. The hardware sucks and their management software is even worse.
I would use any other OptiPlex and rebuild it to use VDI.
@feld @i @kirby @lain @pernia
>I run it on one of the worst VPS disks I've seen (speed wise), only dethroned by BuyVM's slab storage, now kinda on purpose
Actually no. The OVH 4MB/s slab storage special that used to run oban.borked.technology was the worst, yet it could still handle very large Mastodon relays without issues.
@feld @i @lain @kirby idk how you think relatively new hardware (to say, not some 486 pc running win3.1) is ever gonna be worthless compared to a virtual machine. as gay as your computer is, it's gonna have more iops and run postgres better.

It's not even equivalent, frankly, because one is gonna be a homelab experience that you will have to baby and the other is gonna take tree fiddy bucks a month.

Snac2 *can* run on a shitty vps with 5 IOPS, that's the point. It beats pleroma at the constraint game. it doesn't even need an overengineered elephant db to work, or a gay lisp that's never packaged in my distro repos. All it requires is a C compiler, and if you use openbsd, you even get pledge() with it.
@phnt @pernia @i @lain @kirby if the writes are batched nicely i'm sure it could survive :)
@pernia @i @lain @kirby

> one is gonna be a homelab experience that you will have to baby and the other is gonna take tree fiddy bucks a month.

does your VPS provider log in and run updates/patches for you? that's all you have to do with your homelab server. you're really exaggerating the amount of effort involved here
@pernia @i @lain @kirby

but also if you run OpenBSD and manage to have "hundreds of thousands of json files in a single directory", you're gonna have a not so fun time because UFS wasn't made for that

and since it's OpenBSD UFS, you get free filesystem corruption on an unclean shutdown too! toot
@pernia @i @lain @kirby but at $3.50/mo for the VPS, I just have to keep mine alive for 2 years and I've saved money on my much better performing server vs the rented one you propose.
@pernia @i @lain @kirby you know, if we stripped Pleroma down to the bare featureset that Snac has it would perform nearly as well

Everyone OK with us taking a chainsaw to Pleroma and cutting out most of the functionality?
@pernia @i @feld @lain @kirby Don't make me install OpenBSD and Pleroma on my Pentium D thinkstation with premium SATA1 WD 80GB HDD speeds. That space heater has to be slower than any reasonable VPS you can buy today.
@phnt @pernia @i @lain @kirby plz do it, make blog posts about the most unhinged stupid hardware that Pleroma can run on so we can continue pointing the finger at bad VPS providers for ripping off customers with their slow, oversubscribed shared infra
@feld @i @lain @kirby it wasn't made for that? Have you tested that many text files in an FFS directory?

I've seen people put it through worse. But whatever, I haven't had that workload. I haven't had a filesystem that couldn't be fsck'd after an unclean shutdown either.
@feld @i @lain @kirby yea, it's a better deal, if you intend to keep it.
@feld @i @lain @kirby then it wouldn't be pleroma, it'd be a shitty simulacrum of snac.

Just know your limits; snac has a clear niche pleroma can't beat. That's all.
@pernia @i @lain @kirby

> Have you tested that many text files in an FFS directory?

yes, most filesystems fall apart from this except ZFS. i'm gonna be lazy, here's a slop response
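The "falls apart" claim is easy to probe yourself; a minimal sketch that fills one directory and times a single name lookup (N is kept small here, and the timestamp approach assumes GNU `date`; on linear-scan directories like classic FFS or ext4 without dir_index, the lookup cost grows as N reaches the hundreds of thousands):

```shell
#!/bin/sh
# Fill one directory with N empty files, then time one lookup.
# N is deliberately tiny for a quick run; scale it up to see the effect.
DIR=$(mktemp -d)
N=5000
( cd "$DIR" && seq 1 "$N" | sed 's/.*/file-&.json/' | xargs touch )
start=$(date +%s%N)                       # GNU date nanoseconds
stat "$DIR/file-$N.json" > /dev/null      # one name lookup
end=$(date +%s%N)
echo "lookup took $(( (end - start) / 1000 )) us with $N files"
rm -rf "$DIR"
```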
@feld @i @lain @kirby >a slop response
So i'll take that as a "No, but i wanna win this internet argument"
@feld @i @kirby @lain also, you said most filesystems, except zfs. So clearly the problem isn't even OpenBSD's FFS. It'll break just the same on linux with ext4
@feld @pernia @i @lain @kirby Pleroma media sharding was implemented because of this. sjw's server and the filesystem he used (probably ext4 or some meme like btrfs; too lazy to dig the thread up) couldn't cope with it.
@pernia @i @lain @kirby a really good example that many sysadmins have experienced is when you have a mail server using maildir for IMAP storage and someone has a few hundred thousand files in a single mail folder and complains that their mail is slow

if you've managed email for a company within the last 25 years you've probably encountered this
@phnt @pernia @i @lain @kirby it's also why S3 object storage uses directory sharding too

it's just math. limit the size of the search space and it's fast.
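The sharding idea feld describes is just prefix bucketing; a minimal shell sketch where `shard_path` is a hypothetical helper, and the hash choice and two-character (256-bucket) depth are arbitrary:

```shell
#!/bin/sh
# Map a filename into one of 256 subdirectories keyed by the first two
# hex chars of its hash, so no single directory grows unboundedly.
# Same idea as Pleroma's media sharding or S3 key prefixes.
shard_path() {
    name=$1
    # sha256sum is coreutils; BSDs ship `sha256` instead
    prefix=$(printf '%s' "$name" | sha256sum | cut -c1-2)
    printf 'objects/%s/%s\n' "$prefix" "$name"
}
shard_path "1775197590.747947.json"
```

Every lookup then scans a directory 1/256th the size, which is the whole "limit the search space" trick.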
@pernia @i @feld @lain @kirby
I find running revolver (instead of pleroma) helps a LOT when dealing with older hardware. Works great on my 20-year-old Dell
@sampler @i @feld @lain @kirby i have a 1973 pdp-11 and it runs revolver like a charm. couldn't ask for a better more real software