#social 2019-04-21

2019-04-21 UTC
bblfish and Guest84 joined the channel
#
dansup
I'm doing an experiment with pixelfed.social to hide like counts on the timelines. You can still see them on your own posts, and if you view the post itself
bblfish and xmpp-social joined the channel
#
fr33domlover
Q: If my server receives an activity in the inbox, and it's addressed to actors and collections on other servers, how do I know to which local actors to deliver it?
#
fr33domlover
For example server A received an activity addressed to a collection that server A owns, so it does inbox forwarding, expanding the collection, discovering an actor joe@B and delivering the activity to server B
#
fr33domlover
When server B receives it, how does it know it should put it in Joe's inbox?
#
fr33domlover
(How can it tell why it got the activity?)
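A minimal sketch of the forwarding step described above, assuming hypothetical expand_collection and fetch_actor helpers supplied by the caller; the point is that server B only ever sees the POST, and the activity body does not necessarily name joe:

```python
from typing import Callable, Iterable
import requests

def forward_to_collection(activity: dict,
                          collection_id: str,
                          expand_collection: Callable[[str], Iterable[str]],
                          fetch_actor: Callable[[str], dict]) -> None:
    """Forward an incoming activity to the members of a collection we own."""
    for member_id in expand_collection(collection_id):
        actor = fetch_actor(member_id)  # e.g. the document for joe@B
        inbox = actor.get("endpoints", {}).get("sharedInbox") or actor["inbox"]
        # Server B only receives this POST; nothing in `activity` has to
        # mention joe, which is exactly the routing question raised above.
        requests.post(inbox, json=activity,
                      headers={"Content-Type": "application/activity+json"})
```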
#
fr33domlover
cjslep[m], ^
#
fr33domlover
^_^
#
rialtate[m]
fr33domlover: in our software forwarded addresses are rewritten. Some people think this is blasphemy but I don't see how else it would be possible.
#
fr33domlover
rialtate[m], yeah I'm wondering how it's supposed to be done. Maybe one way is to have a separate inbox URI for each actor; relying on that means you can't use sharedInbox here but it does mean you can extract the target actor from the inbox URL. Hmmm rewriting addresses can work but what if you receive stuff from a server that doesn't do it like that?
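A minimal sketch of the per-actor inbox idea, assuming a hypothetical /users/&lt;name&gt;/inbox URL scheme; the recipient is recovered from the request path rather than from the activity's addressing:

```python
import re

# Hypothetical per-actor inbox URL layout; a sharedInbox URL carries no
# such per-recipient information.
INBOX_RE = re.compile(r"^/users/(?P<name>[^/]+)/inbox$")

def local_recipient_from_path(path: str) -> str | None:
    """Return the local username a POSTed activity is for, or None."""
    m = INBOX_RE.match(path)
    return m.group("name") if m else None

# local_recipient_from_path("/users/joe/inbox") -> "joe"
```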
#
fr33domlover
I'm wondering how this generally happens
#
fr33domlover
Like, if Mastodon sends out an activity to someone's followers, does it list them all in the activity?
#
fr33domlover
should check
bblfish joined the channel
#
rialtate[m]
Some receivers may do some magics on the followers collection, but that only works for followers and is hacky IMHO.
#
fr33domlover
rialtate[m], cjslep[m] actually it's a bigger question: What if you are addressed in "bto" or "bcc"? those are removed before delivery so you won't see your actor ID in the addressing, but you're still supposed to receive the activity
#
fr33domlover
The only way I see there, is to have a distinct inbox URI per actor
#
fr33domlover
Much like in email
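For illustration, roughly what a recipient server might see once bto has been stripped (values are made up); nothing in the body names the bcc'd local actor, so only the inbox URL it was POSTed to identifies them, much as the envelope recipient does in email:

```python
# Illustrative delivered payload, as a Python dict.
received = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://a.example/users/alice",
    "to": ["https://a.example/users/alice/followers"],
    # "bto": ["https://b.example/users/joe"]  # removed before delivery
    "object": {"type": "Note", "content": "hi"},
}
```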
bblfish joined the channel
#
rialtate[m]
fr33domlover: true. In this case you may still accept it simply because you know you are subscribed to (following) the actor that authored it.
#
rialtate[m]
Realistically, you would have to accept based on the ACL of "can this author or owner/creator send me stuff?" -- except there isn't exactly owner semantics in AP. There are ways to represent it, but compatibility across many different software implementations probably isn't likely without special effort.
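A rough sketch of that acceptance rule, with a following set standing in for whatever ACL a server actually keeps; since AP defines no owner relation, the object's attributedTo is used here only as a stand-in for "owner/creator":

```python
def should_accept(activity: dict, recipient_following: set[str]) -> bool:
    """Accept an unsolicited activity only if we follow its author or creator."""
    author = activity.get("actor")
    obj = activity.get("object") or {}
    creator = obj.get("attributedTo") if isinstance(obj, dict) else None
    return author in recipient_following or creator in recipient_following
```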
#
fr33domlover
rialtate[m], I guess it's a bit like email: When some stranger sends you something, there's no guaranteed automatic way to tell whether it's spam or desirable content. Say, if you got it because you're in bto/bcc, checking your following list won't help
#
fr33domlover
You just need to somehow know who the actual recipient actor was, even if it isn't specified anywhere in the activity
#
fr33domlover
Unless we decide together on a way to do that ^_^ Or use the existing way: Unique inbox URI per actor
#
fr33domlover
Every other trick isn't guaranteed to work unless we all implement the same trick
#
fr33domlover
(For example, setting bto/bcc when doing inbox forwarding)
#
fr33domlover
(This shouldn't conflict with the spec, but, it will work only if everyone does it)
#
rialtate[m]
> <@irc_fr33domlover:cybre.space> rialtate[m], I guess it's a bit like email: When some stranger sends you something, there's no guaranteed automatic way to tell whether it's spam or desirable content. Say, if you got it because you're in bto/bcc, checking your following list won't help
#
rialtate[m]
It still would. If in your software following == all of their posts show up in your stream (as with most implementations currently out there), then addressing doesn't matter at all. Each actor that allows it sees it.
#
rialtate[m]
That's of course more difficult to do if your software wastefully and dumbly stores a copy of every object for every recipient (bad idea anyway)
#
rialtate[m]
More difficult == more compute time
#
fr33domlover
rialtate[m], what if your server has 10,000 users following me, and I send an activity with "bcc" listing just 3 of those 10,000 people. I wish only those 3 to receive the activity. If you rely on following, you'll deliver to many people who shouldn't get the content
#
fr33domlover
rialtate[m], in AP it's possible to address people who aren't your followers
#
fr33domlover
And more generally, address just some of your followers, and some who aren't your followers, etc.
#
fr33domlover
There's no need to store a whole copy for each recipient, but, yes, right now I store 1 copy of the activity and then 1 DB row for each recipient's new inbox item
#
rialtate[m]
> <@irc_fr33domlover:cybre.space> rialtate[m], what if your server has 10,000 users following me, and I send an activity with "bcc" listing just 3 of those 10,000 people. I wish only those 3 to receive the activity. If you rely on following, you'll deliver to many people who shouldn't get the content
#
rialtate[m]
If you are not sending to a hidden 3rd party collection then there would be no translation. In the case of singular directed recipients on bcc/bto you *have* to deliver to each individual inbox with the other bto/bcc recipients stripped.
#
rialtate[m]
Well, actually no. The receiving server has to strip them. And you can still utilize shared.
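The ActivityPub delivery section has the delivering side remove bto/bcc before each POST (fr33domlover restates this just below); a minimal sender-side sketch, with deliver() standing in for an HTTP POST to one inbox:

```python
from typing import Callable

def deliver_with_bcc(activity: dict, bcc_inboxes: list[str],
                     deliver: Callable[[str, dict], None]) -> None:
    """Send a copy of the activity to each bcc'd inbox, bto/bcc removed."""
    outgoing = {k: v for k, v in activity.items() if k not in ("bto", "bcc")}
    for inbox in bcc_inboxes:
        # Each bcc recipient gets its own copy; neither they nor a shared
        # inbox can see who else was bcc'd.
        deliver(inbox, outgoing)
```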
#
fr33domlover
rialtate[m], spec says you strip before delivery; so you may receive an activity addressed to 1 actor, but there were originally another 100 in bto/bcc, who are on your server and you need to deliver to them, despite their IDs not being listed in the activity
#
fr33domlover
I guess that doesn't happen in most implementations? But technically, it can
#
fr33domlover
So if you receive it at the shared inbox, you can't determine who these 100 actors are
#
fr33domlover
And if you receive the same activity 100 times in individual inboxes, then you can :P
#
rialtate[m]
> <@irc_fr33domlover:cybre.space> And if you receive the same activity 100 times in individual inboxes, then you can :P
#
rialtate[m]
Yeah that's lame. The capability to deliver 1000 objects to a server at once (preferably to n different inboxes) is also sorely missing. Without it AP can't really scale above 100s of followers.
#
rialtate[m]
Or 100s of active users even :x
#
rialtate[m]
It's actually kind of funny in a way, because if it's to 1000 contacts on 1000 different servers you can quickly open up 1000 sockets and send 1000 packets down the wire and they all process in parallel, but if those 1000 contacts are on one server you have to chug through and wait for the single server to respond before giving it more work.
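A rough sketch of that fan-out, using a thread pool so deliveries to many distinct servers proceed concurrently (inbox URLs and worker count are illustrative):

```python
import concurrent.futures
import requests

def fan_out(activity: dict, inboxes: list[str], workers: int = 100) -> None:
    """POST one activity to many inboxes concurrently."""
    def post(inbox: str) -> int:
        r = requests.post(inbox, json=activity,
                          headers={"Content-Type": "application/activity+json"})
        return r.status_code

    # Deliveries to different servers overlap; many deliveries to a single
    # server still queue behind that server's own processing.
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(post, inboxes))
```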
#
fr33domlover
rialtate[m], depends on the server! You can process requests in parallel, I know some impls are starting to do that (mine does from the beginning, although I put in some locks as temporary hacks and am now replacing them with safe atomic SQL queries)
#
fr33domlover
rialtate[m], but yeah it's funny
#
fr33domlover
I wonder though whether it's a good idea to scale
#
fr33domlover
Like, have more than a few 1000s of users
bblfish, cesar[m] and timbl joined the channel
#
nightpool[m]
Uh.........
#
nightpool[m]
even the smallest mastodon impls run their receive queue on like 20 threads
#
rialtate[m]
With latency, locking, encryption, etc. 1 server to 1 server you will never get more than 200 https requests per second and that's being generous. But, 1 to n servers with the same hardware and network connection you'll get over an order of magnitude better.
#
rialtate[m]
Go ahead, bench it, you'll see. I've already done it before.
#
nightpool[m]
i'm not sure why you're assuming a single domain name is always going to resolve to one server
#
nightpool[m]
mastodon.social is a bunch of web servers
#
nightpool[m]
also, like, 200 rps is really really slow. plenty of people do hundreds of thousands of requests per second just with stock ruby on rails servers
#
rialtate[m]
Lol
#
rialtate[m]
Ruby? More like 1 per 5 sec
#
nightpool[m]
./shrug
timbl, hellekin, Guest84 and bblfish joined the channel; vitalyster left the channel