#social 2023-06-18

2023-06-18 UTC
#
Justus
that seems to be a pretty important thing to not have in a social networking protocol. On a practical level: Without any form of innate discovery mechanism you're literally making walled gardens. Granted, the walls are climbable, but they do need to be climbed first. It sort of violates a core concept imo. It's not even that it's a question of what data needs to be private/scrapable, that's still a thing the individual instance has to decide
#
Justus
but the lack of a built-in mechanism means that even if an instance is tailored towards creating publicly readable content, there is no way for it to publish that in a standardized fashion
#
Justus
as noted above, which one of the two is doing it "right"? beehaw.org by giving a useless pseudo starting point, or mastodon.social by responding with an HTTP 406 Not Acceptable?
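(For context on the 406: servers content-negotiate on the Accept header. A minimal sketch of such a probe and how the two observed answers differ; the helper names are hypothetical and the probed behavior is only what's described above, not a documented API:)

```python
# The Accept value is the standard ActivityPub media type (from the
# ActivityPub spec); function names and classifications are illustrative.
AP_ACCEPT = 'application/ld+json; profile="https://www.w3.org/ns/activitystreams"'

def probe_headers():
    """Headers for requesting an ActivityPub representation of a resource."""
    return {"Accept": AP_ACCEPT}

def classify(status):
    """Interpret the status codes reported above for an instance-level probe."""
    if status == 200:
        return "serves a machine-readable entry point"
    if status == 406:
        return "refuses content negotiation"
    return "no standardized answer"
```

Neither behavior is "wrong" per spec, which is exactly the complaint: the spec leaves the instance-level entry point undefined.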
#
Justus
it's like if e-mail didn't have an agreed-upon HELO.
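(The SMTP analogy in concrete terms: every mail server, whatever software it runs, answers the same fixed opening handshake per RFC 5321. A sketch of that dialogue, with hypothetical hostnames:)

```python
def smtp_opening(client_host, server_host):
    """The fixed SMTP opening handshake: server greets, client sends HELO,
    server accepts. Every conforming server speaks this identically, which
    is the "agreed upon" entry point the fediverse lacks at instance level."""
    return [
        f"S: 220 {server_host} ESMTP ready",
        f"C: HELO {client_host}",
        f"S: 250 {server_host}",
    ]
```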
#
Justus
for my personal use case, I sort of accept this shortfall in the protocol and just have the user discover stuff via the user nodes. So I'll just expect a working entry point, but it feels very wrong, and it prevents full interoperability despite it being a desirable and indeed possible goal
Loqi_ joined the channel
#
trwnh
it's the Web, so... the discovery is external. like in search engines, directories, etc
#
Justus
a mechanism for a user in one part of the network to discover what is on another is not really omissible from a "decentralized social networking protocol". By the very nature of decentralization it requires such a mechanism. I mean, we are talking about fairly well understood general principles here. Discovery is a quintessential step of any networking, let alone social networking. The core of federation in all studies I'm aware of is always the integration of information across the federation borders.
#
Justus
and even when you ignore the scientific side of the topic, from pure usability the approach of "just use Google" is an unnecessary hurdle. It just offloads the responsibility to a known problematic third party, and misses the potential benefits of keeping it inside the protocol. If anything, having protocol-based discovery would allow forcing web scrapers away from the full trough to the part of it that they're intended to be privy to
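(What does exist in-protocol is per-user discovery via WebFinger, RFC 7033; it's the instance-level content discovery that's missing. A sketch of the user-level lookup that is standardized, with a hypothetical handle:)

```python
from urllib.parse import quote

def webfinger_url(handle):
    """Build the WebFinger lookup URL for a fediverse handle like
    'user@example.org' (RFC 7033 well-known path, as used by Mastodon)."""
    user, host = handle.lstrip("@").split("@")
    # The resource is percent-encoded as a query parameter value.
    resource = quote(f"acct:{user}@{host}", safe="")
    return f"https://{host}/.well-known/webfinger?resource={resource}"
```

An instance-level equivalent of this, pointing scrapers and peers at exactly the content meant to be public, is the mechanism being argued for above.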