#dev 2024-06-12
2024-06-12 UTC
Salt_, AramZS, geoffo, timmarinin, eitilt, thepaperpilot, sp1ff, [jeremycherfas], [qubyte], chimo, sadome, bret, nertzy, [schmarty], barnaby, srijan, sebbu and thepaperpilot_ joined the channel
mahboubine joined the channel
# thepaperpilot Yeah, the fediverse is actually core to my motivations behind wanting to write this spec. I'll clarify in a longer form doc once I'm at home
# [snarfed] oh and https://docs.joinmastodon.org/spec/security/#ld-sign , which is actually in semi-broad usage in the fediverse already, unlike those two FEPs which afaik are mostly hypothetical
Salt joined the channel
# thepaperpilot Thanks. To get into it a bit now, my motivations are actually coming from a place of a client-based (rather than server-based) re-envisioning of the fediverse, where people locally sign things and send them off to as many servers as they can, rather than sending something to one specific server that they've made an account with and effectively attached their identity to. Under this new system, personal websites could also b
# aaronpk sounds like you want scuttlebutt https://scuttlebutt.nz/docs/protocol/
# thepaperpilot Actually talking about something in development called "weird"
# thepaperpilot https://blog.erlend.sh/weird-netizens
# Loqi Salt: tantek left you a message on 2017-06-06 at 1:34am UTC: added a bunch more proposed Leaders Summit sessions for your consideration - please take a look and add interested (or not) notes, or other suggestions! https://indieweb.org/2017/Leaders#Sessions
# capjamesg[d] Wow, a message from 2017!
# [snarfed] thepaperpilot cool! https://codeberg.org/fediverse/fep/src/branch/main/fep/ae97/fep-ae97.md discusses client signing in the fediverse, but activitypub's use of URLs for actor ids does tie fediverse users pretty inextricably to specific instances
# thepaperpilot Yeah, which I see as a problem. Most people can't or won't self-host, so they'll end up needing to pick a server, which adds massive friction that centralized social media doesn't have, and reintroduces many of the problems of centralization. The re-envisioning would let you download an app, pick a display name, and you're good to go. No instance to pick, no password to set.
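(To make that flow concrete, here is a minimal sketch of client-side signing, assuming Ed25519 keys via the PyNaCl library; the payload shape and field names are invented for illustration, not from any spec.)

```python
# Hypothetical sketch of the client-signed-post flow described above,
# using Ed25519 via PyNaCl (pip install pynacl). Payload shape is invented.
import json
from nacl.signing import SigningKey

# The app generates a keypair on first run; the public key *is* the
# identity, so there is no instance to pick and no password to set.
signing_key = SigningKey.generate()
author_id = signing_key.verify_key.encode().hex()

post = {
    "author": author_id,          # identity is a key, not a server URL
    "display_name": "alice",
    "content": "<p>Hello from nowhere in particular</p>",
    "published": "2024-06-12T00:00:00Z",
}
payload = json.dumps(post, sort_keys=True).encode()
signature = signing_key.sign(payload).signature.hex()

# The client can now send {post, signature} to as many servers as it
# likes; any server or reader can verify with only the public key.
```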
# thepaperpilot Nostr is very similar for sure. Haven't heard of farcaster though
# thepaperpilot Hmm, farcaster looks like it still has the pick a server problem
# thepaperpilot Perhaps I need to continue looking into it, but it sounded like you're still making an account with a single server, attaching your identity to it
# [tantek] [snarfed] it may be worth distinguishing "local rehosting" (which IMO implies a degree of /longevity) vs "local caching", e.g. in the context of your statement that "instance sends your post to all of your followers' instances, which rehost it" — do Masto instances really rehost remote content "forever"? Or do they merely "cache" remote content for some period of time, and expire/abandon it eventually?
# [tantek] Like I don't believe every Masto instance caches images from every remote host. That wouldn't be sustainable. Or maybe that explains various Masto instance shutdowns, when an instance exceeds sustainable local hosting setups (without costing the instance admin a bunch more money than they're willing to put out regularly for a hobby)
# thepaperpilot My understanding is the posts themselves truly are fully replicated and never expire, but images typically either remain links to the original source or are cached on each instance
# thepaperpilot Although I suspect photo- and video-centric platforms, like Pixelfed, may replicate the media as well
# [tantek] to me (re)hosting implies a "forever" permalink, that is, it's there until the user deletes it. whereas "cache" implies it will be deleted whenever the code running decides to for whatever reason, and is dependent on being able to re-retrieve from an external source the thing that was "cached" if it's requested in the future
gRegor joined the channel
# thepaperpilot For writing up the proposal, I was thinking about how to handle declaring what content is actually being signed. I think it would be best to have it sign the raw html, meaning if any replicator transforms the content, they'll need to somehow also provide the original html to clients so they can independently verify the signature. The signature block naturally could not be contained within the content being signed, so I was
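(A rough sketch of that detached-signature idea, assuming Ed25519 via PyNaCl: since the signature cannot live inside the bytes it covers, it travels alongside them, and a transforming replicator must keep the original HTML available for verification. Field names are hypothetical.)

```python
# Detached signature over the raw HTML bytes; the signature travels
# outside the signed content. Assumes Ed25519 via PyNaCl; hypothetical.
from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError

signing_key = SigningKey.generate()
original_html = b"<article><p>Signed at the source.</p></article>"

envelope = {
    "original_html": original_html,  # the exact bytes that were signed
    "signature": signing_key.sign(original_html).signature,
    "public_key": signing_key.verify_key.encode(),
}

def verify(envelope: dict) -> bool:
    # Clients check the original bytes, however a replicator rendered them.
    try:
        VerifyKey(envelope["public_key"]).verify(
            envelope["original_html"], envelope["signature"])
        return True
    except BadSignatureError:
        return False
```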
timmarinin and geoffo joined the channel
# jimwins Even though this sort of re-hosting is key to the architecture of ActivityPub/Mastodon, it also seems to me to be a constant source of confusion for users when suddenly that re-hosted content is used in a way they want to object to, like being bridged to another network, or whatever it is that Maven is doing.
# jimwins And the temperature goes up quickly when you have a VC-backed company in there, or something that has a whiff of "AI".
# thepaperpilot [tantek]: I think it'd be reasonable to have "rehosters" occasionally re-query the original source to look for updates. Naturally the edited version would have a new signature. You could even include an element within the signed content saying it previously had <old sig> as the signature. That whole thing being signed with the same private key would verify it was an edit by the original author
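(A sketch of that edit-chaining idea: the new revision embeds the previous signature inside the payload it signs, so the same private key links the edit back to the original author. Everything here is illustrative.)

```python
# Edit chaining: each revision signs its content plus the previous
# signature, so one private key proves the whole edit history.
import json
from nacl.signing import SigningKey

signing_key = SigningKey.generate()

def sign_revision(content_html, previous_signature=None):
    payload = json.dumps(
        {"content": content_html, "previous_signature": previous_signature},
        sort_keys=True,
    ).encode()
    return {
        "content": content_html,
        "previous_signature": previous_signature,
        "signature": signing_key.sign(payload).signature.hex(),
    }

v1 = sign_revision("<p>original post</p>")
v2 = sign_revision("<p>edited post</p>", v1["signature"])  # v2 links to v1
```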
# thepaperpilot [tantek]: Fully agree. That said, no one would be _required_ to implement the signature block. And you could elsewhere describe what you're okay with people doing with your content, e.g. free for all non-commercial use, except training ai models
# thepaperpilot For sure, that all sounds like a good idea
# jimwins "occasionally re-query" is kind of hiding a lot of problems, I think. Mastodon already has a thundering-herd problem with generating link previews, now every post that someone somehow re-hosts will get bombarded with update checks?
# thepaperpilot It could be push based then, similar to webmentions
# jimwins That is how ActivityPub works now, to my understanding, when you edit a post, although I'm not sure how much signing goes on. What I think is missing is a peer-to-peer way for servers to pass along updates (signed) so that the source doesn't have to track and notify everyone who might care.
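(A sketch of the gap jimwins points at: because updates carry the author's signature, any server can relay them onward and recipients can still verify them, so the source doesn't have to notify everyone itself. The peers, endpoint, and message shape below are all made up.)

```python
# Peer-to-peer relay of signed updates: verify, then gossip onward.
# Assumes PyNaCl for verification and requests for transport; the
# /inbox endpoint and message shape are hypothetical.
import requests
from nacl.signing import VerifyKey
from nacl.exceptions import BadSignatureError

known_peers = ["https://peer-a.example", "https://peer-b.example"]

def relay_update(update: dict, author_key: bytes) -> None:
    try:
        VerifyKey(author_key).verify(update["payload"], update["signature"])
    except BadSignatureError:
        return  # drop forgeries rather than relaying them
    for peer in known_peers:
        requests.post(f"{peer}/inbox", timeout=10, json={
            "payload": update["payload"].decode(),
            "signature": update["signature"].hex(),
            "author_key": author_key.hex(),
        })
```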
# thepaperpilot Right, that's fair. Not sure I have an adequate response to that
# jimwins Many years ago, there was a rough sketch of an idea that ended up being called "feedmesh". https://trainedmonkey.com/2004/9/9/decentralized_web_site_log__update_notifications_and_content_distribution
# jimwins And then there was FeedTree. https://slashdot.org/story/06/02/20/1719241/faster-feeds-using-feedtree-peer-to-peer
# thepaperpilot Even polling the original site is something a static site can't really do on its own, so I'm not a huge fan. Perhaps the rehosted content should just say "I'm specifically responding to or liking the version shown here, made at this timestamp. Check the original link for any updated content". It does mean the most recent revision of the content is less redundantly replicated, though
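(One possible shape for that pinned-revision idea; all field names here are invented.)

```python
# A rehosted copy that pins the exact revision it responds to, and
# points readers at the live original for anything newer. Hypothetical.
rehosted_reply_context = {
    "original_url": "https://example.com/posts/42",
    "pinned_content": "<p>The revision being replied to</p>",
    "pinned_signature": "ab12…",  # signature of that specific revision
    "retrieved_at": "2024-06-12T14:00:00Z",
    "note": "Responding to this revision; check original_url for updates.",
}
```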
# [tantek] e.g. /reply-context , /quote post, /comments etc.
# jimwins Sometimes what you want to be commenting on may be tied to the particular revision. Like if you're commenting on Pew's recent report about "racial conspiracy theories" you probably want to make sure someone can tell your commentary is tied to the initial publication where they used that language instead of whatever they may end up revising it to be.
# thepaperpilot Right. In that sense, automatically updating the rehosted copy would be an anti-feature. That simplifies things greatly, then
jonnybarnes joined the channel
# [KevinMarks] Though there is the zombie site problem, where an expired domain is replaced by a scam entity which serves some version of the original site but with injected ads, phishing, or crypto mining.
# [KevinMarks] C2PA has provisions for editing by first and other parties to indicate what they changed.
# thepaperpilot The new site couldn't sign the edited article, unless they somehow got access to the private key
# thepaperpilot So the content could still be verified, even if everything around it has been tampered with
# capjamesg[d] I don't think C2PA is hypothetical. Instagram can detect if a photo has been generated with AI and displays a tag with it.
# capjamesg[d] I assume they are using C2PA, since one of the tools that causes it is generative fill in Adobe Photoshop.
# capjamesg[d] Oooh, fascinating.
# capjamesg[d] I'm not sure.
# capjamesg[d] > We’re building industry-leading tools that can identify invisible markers at scale – specifically, the “AI generated” information in the C2PA and IPTC technical standards – so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.
# aaronpk here's an artist whose photo was flagged as AI incorrectly https://www.instagram.com/p/C7jjJLPo9PU/?img_index=1
# [KevinMarks] And do domain based signing
# [snarfed] ok arguably C2PA is non-deterministic in a different sense, in that they'll never get 100% adoption from all hardware and software vendors, so C2PA will only ever identify a subset of all content, original or otherwise. still though, seems like a useful building block, I am glad it exists
# [KevinMarks] Yes, it's designed to be a trust in authority framework - which does fall down if someone working for the organisation is not trustworthy
# [KevinMarks] “Can we erase our history
# [KevinMarks] Is it as easy as this
# [KevinMarks] Plausible deniability
# [KevinMarks] I swear I've never heard of it” https://youtu.be/vt0J3IEYkIg?si=iEQd4EWSWrvz8lWK
# [snarfed] it feels pretty orthogonal to me, more like the self hosting purity test fallacy. very few of us implement our own web servers from scratch, or rack our own servers in our basements - or in this case, build our own cameras or photo editing software - but we're still indieweb in both spirit and practice, even if we use hardware and software from other people, even big companies. we can use C2PA devices and software (or not!) and not
H4kor and [marksuth] joined the channel
# [schmarty] hey folks! i'm looking to improve the indieweb webring by making it self-gardening. (automatically marking sites as active/inactive when the webring links are detected on their page or not)
# [schmarty] i'm thinking about a tiered polling system, like aaronpk's watchtower https://github.com/aaronpk/Watchtower/blob/c54b7430c65d95c2fd75397e2a412ad7b566b132/jobs/CheckFeed.php
# [schmarty] aaronpk: lol nah. the gardening method is someone complains aloud that there are a lot of dead sites and i go run the gardener on every site. 😅
# [schmarty] i think my tiers will be something like 1 day, 3 days, 7 days, 14 days, 30 days, plus some jitter to spread out in the day things run
# aaronpk the tiered thing works pretty well but most sites end up falling into the first or last tier https://media.aaronpk.com/2024/06/12141332-9319.png
# [schmarty] aaronpk++ that's good to know!
# [schmarty] since the webring links check is just binary (links found or not), my goal is to actually have everything trend toward the lowest (checked least often) tier. if it ever finds a different result than last time, i'll have it bump to the highest (checked soonest) tier.
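(A rough sketch of that tier-bump scheme, using the 1/3/7/14/30-day tiers mentioned above plus jitter; function and field names are illustrative.)

```python
# Tiered polling with a bump-on-change rule: an unchanged result demotes
# a site toward the slowest tier, any change promotes it back to the
# fastest. Jitter spreads the checks out over the day.
import random
from datetime import datetime, timedelta

TIERS_DAYS = [1, 3, 7, 14, 30]

def schedule_next_check(site: dict, links_found: bool) -> dict:
    if links_found != site.get("last_result"):
        site["tier"] = 0  # result changed: re-check soonest
    else:
        site["tier"] = min(site.get("tier", 0) + 1, len(TIERS_DAYS) - 1)
    site["last_result"] = links_found
    jitter = timedelta(minutes=random.randint(0, 120))
    site["next_check"] = (datetime.utcnow()
                          + timedelta(days=TIERS_DAYS[site["tier"]])
                          + jitter)
    return site
```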
# capjamesg[d] aaronpk What dashboard is that?
barnaby joined the channel
# capjamesg[d] Do you have a personal dashboard like that for all your projects?
timmarinin joined the channel
gRegor joined the channel
gRegorLove_, amyiscoolz and wagle joined the channel