#[fluffy]I’m updating Publ’s TicketAuth issuer implementation based on the conversation we just had in #indieweb-chat (oops). Since Publ is just granting a generic access token for perusing the site I’m just setting `iss` to be the site root (just like the `resource`), and I’m leaning on folks still being able to parse the `ticket_endpoint` rel from the page itself because adding `indieauth-metadata` is kind of hairy for a bunch of reasons.
#GWGSo, theoretically, the endpoint redeeming a ticket can decide to grant a token with a scope or with a resource
#GWG[fluffy]: My code already looks for a fallback if it can't find the metadata endpoint.
#[fluffy]and yeah Publ doesn’t really care about scopes right now, it’s just looking at the identity for being able to read private resources (although future things will likely use scopes, such as if I ever get around to implementing a micropub implementation, but that also feels out-of-scope for TicketAuth)
#[fluffy]I’d hope that most things would fall back to looking at the rels on the page itself since the metadata rel is relatively recently added too
#GWGThe idea is that the ticket extension is, like IndieAuth itself... flexible enough to work for all these use cases
#gRegorThe spec still includes a note to fall back to rel=authorization_endpoint and rel=token_endpoint, but the note itself might be removed in future spec versions.
#[fluffy]oh I guess Publ also needs to parse indieauth-metadata to get the token_endpoint, huh
#[fluffy]it’s relatively new compared to the existence of a buttload of indieauth implementations, and also it’s a complicating factor that I still haven’t thought about in terms of how Publ itself doesn’t want to be a central implementation for everything
#[fluffy]although honestly that probably doesn’t belong in Authl, which only cares about `authorization_endpoint`
#gRegorAdded rel=ticket_endpoint and dev.beesbuzz.biz sent me a token successfully, nice! I don't redeem tickets yet, but baby steps.
#GWG[fluffy]: I just updated my code (awaiting review) to use the new discovery flow and fall back on the old. I'd say that, even if you don't support anything else...
#[fluffy]Right now the default ticket lifetime is only 60 seconds but I’m going to change that to 600
#gRegorGWG, so were you saying in chat that you check `iss` first, and if you don't find the endpoints, you check the `resource`?
#[fluffy]Oh wait now I remember why authl does the full indieauth suite of endpoint discovery and yeah I’m gonna keep doing it that way. I just need to fix authl. What else lives in the new endpoint thing? Just indieauth and ticketauth? Do micropub, microsub, webmention, etc still live on the resource itself?
#GWGgRegor: Correct; we added iss to address the fact that resource doesn't always have a header, since it might be a hidden resource
#GWG[fluffy]: Yes, you can't discover the Microsub and Micropub endpoints from the metadata endpoint
#gRegorGWG, interesting. My current thinking is if `iss` has a metadata endpoint and the `issuer` in it is validated like the main spec describes, then if there's no ticket_endpoint in the metadata, it would error out.
#GWGgRegor: Good point, I never coupled the two, probably should
#gRegorI suppose I could add "check for rel=ticket_endpoint" before the error out step
#gRegorBut I probably would not want to go beyond that to then check `resource`, repeating all those steps
#GWGgRegor: I'll have to refactor it to be more effective with it
#gRegorI'll have to think about it more. I wonder if it's a positive that, if a site advertises indieauth-metadata but doesn't include an endpoint you're looking for, you should halt there instead of falling back to rel= endpoints
#gRegorAnd obviously if a site doesn't advertise indieauth-metadata, you fall back to rels like previously
#GWGI think to encourage adoption, I would say it makes sense
#gRegorThat's kind of how I'm leaning. Makes sure your indieauth-metadata is set up correctly
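[Editor's note: the discovery order gRegor and GWG settle on above can be sketched in Python. This is a hedged illustration, not Publ's or GWG's actual code; the function names, the regex-based rel parsing, and the injectable `fetch` parameter are all illustrative assumptions. A real implementation would use a proper HTML parser and also check HTTP `Link` headers.]

```python
"""Sketch of metadata-first discovery with a rel= fallback, per the
chat above: if a site advertises rel="indieauth-metadata", trust that
document exclusively (halt rather than fall back); otherwise fall
back to the page-level rel="ticket_endpoint"."""
import json
import re
import urllib.request


def discover_ticket_endpoint(profile_url, fetch=None):
    """Return the site's ticket_endpoint URL, or None if not found."""
    # fetch is injectable for testing; defaults to a plain HTTP GET.
    fetch = fetch or (lambda url: urllib.request.urlopen(url).read().decode())
    html = fetch(profile_url)

    def find_rel(rel):
        # Crude rel parsing for illustration only.
        m = re.search(r'<link[^>]+rel="%s"[^>]+href="([^"]+)"' % rel, html)
        return m.group(1) if m else None

    metadata_url = find_rel("indieauth-metadata")
    if metadata_url:
        # Site opted in to metadata: no rel= fallback from here on,
        # which (per the chat) pressures sites to keep it correct.
        metadata = json.loads(fetch(metadata_url))
        return metadata.get("ticket_endpoint")
    return find_rel("ticket_endpoint")
```

The key design point from the conversation: once indieauth-metadata is advertised, a missing endpoint is an error, not a trigger to repeat discovery against `resource`.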
#gRegorstepping away for a bit. excited to see this advancing though!
Loqi joined the channel
#GWG[fluffy]: I can document it in Ticketing, but for regular IndieAuth... I think we have only the implementation note
#[fluffy]okay question about indieauth-metadata: what is the client meant to do with the `issuer` that comes from the JSON blob?
#[fluffy]like does that override the URL as the identifier?
#[fluffy]e.g. does that now need to be what’s checked against the `me` in the response? or can an IndieAuth login flow just completely ignore that and treat the JSON’s `authorization_endpoint` exactly the same as the older spec?
#GWG[fluffy]: Right now the issuer identifier is verified in the authorization flow.
#GWGIn theory there should be some verification, but being as you found the token endpoint via the issuer identifier URL...it seems verified enough
#GWGDo you think there is an impersonation attack risk somewhere in that flow?
#[fluffy]no, I’m just wondering what the intention of the `issuer` is I guess
#[fluffy]Also has the endpoint verification thing changed since the last time I was participating in this? where the final canonical identity URL needs to have the same `authorization_endpoint` to be considered valid
#[fluffy](which was the thing added in order to make it possible to safely support multiple indieauth endpoints on a single domain e.g. http://tilde.club)
#GWG[fluffy]: In a normal authorization flow or in the ticket flow?
#[fluffy]like, pretend I haven’t looked at any of this stuff since before February 2022 and I want to update my implementation to be correct with the current specification.
#GWGThe authorization response now SHOULD return the iss parameter, and the client MUST compare it to the one provided in the metadata endpoint during discovery and fail if it doesn't match.
[tw2113], [tantek], [jacky], IWSlackGateway, [jamietanna], [aciccarello]2 and [fluffy] joined the channel
#[fluffy]it looks like this is just a thing that gets provided in the authorization response and has to match?
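[Editor's note: the check GWG describes reduces to a small comparison, sketched here as a hedged illustration. The client remembers the `issuer` from the metadata document at discovery time and compares it to the `iss` parameter in the authorization response. The function name and lenient handling of a missing `iss` are assumptions, not spec text.]

```python
def verify_authorization_response(response_params, expected_issuer):
    """Fail the flow if iss is present but doesn't match discovery.

    response_params: dict of query parameters from the authorization
    response; expected_issuer: the issuer saved during discovery.
    """
    iss = response_params.get("iss")
    if iss is None:
        # Older servers may omit iss; a stricter client that requires
        # the newer spec could choose to fail here instead.
        return True
    return iss == expected_issuer
```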
#[fluffy]Authl currently only uses IndieAuth to verify identity, and not to do any actual API call stuff. Token stuff will become relevant if I ever get around to writing that social reader I keep saying I’m gonna make.
#[fluffy]but at this point in my life I’d much rather just use someone else’s reader, if anyone ever makes one that supports TicketAuth and has a means of reading everything oldest-to-newest in a single stream of content.
#[fluffy]the last time I checked out Microsub readers (which, to be fair, was *ages* ago), nobody seemed to support that or understand why I’d want that 🙂
#[fluffy](the “why” being that I read a lot of serialized content)
#GWG[fluffy]: We haven't iterated much on Microsub either of late
#GWGI started again on Ticketing because capjamesg mentioned it as part of the new W3C Social effort and I thought it was something we could get outside interest on possibly as well
#[fluffy]yeah lack of widespread ticketing support continues to be something that keeps people posting on silos, especially Facebook since that’s the only thing in widespread use that even has fine-grained access control
#GWGSo, being able to show a simple implementation that can be hooked into a variety of different use cases would be great.
#[tantek]Or posting to group texts or one-off email threads with specific to: lists
#[tantek]^ that's the usecase to beat. Not FB IMO. FB use for anything "private" is imploding in slow motion
#GWG[tantek]: You have a lot of experience at moving things forward, what do you think needs to happen? Other than what is...trying to get a few new implementations going to iterate?
#[tantek]GWG, gotta get back to documenting crisp listings of the use-cases and user flows. Without that you won't know when to say no to features that don't advance a specific use-case
#[tantek]Then you need 2+ people interested in the same use-case to implement the minimum necessary protocol(s) for that use case
#[tantek]^ also this is a good example of why protocol design requires good writing skills, and is helped by good diagrams
#GWGDiagrams are not something I'm good at. Writing skills I can handle.
#gRegorThe metadata's `issuer` must also be https and a prefix of the metadata URL, so clients doing discovery should verify that and error out if it doesn't.
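[Editor's note: gRegor's validation rule above can be sketched directly. This is an illustrative check, not any project's real code; the rejection of query/fragment components is an assumption drawn from the prefix requirement.]

```python
# Validate the issuer from an indieauth-metadata document: it must be
# an https URL and a prefix of the metadata URL it came from.
from urllib.parse import urlparse


def issuer_is_valid(issuer, metadata_url):
    parsed = urlparse(issuer)
    if parsed.scheme != "https":
        return False
    if parsed.query or parsed.fragment:
        # A URL with a query or fragment can't sensibly be a prefix.
        return False
    return metadata_url.startswith(issuer)
```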
#Loqiaudience is an experimental property of a post that indicates the intended recipients (readers) of the post https://indieweb.org/audience
#gRegorHm, I thought there were old examples somewhere on the wiki of kylewm (I think?) creating posts limited to a couple domains, had to sign in with IndieAuth to view them. Don't see it on /audience or /private_posts tho
#[fluffy]I also have a notion of having, like, multiple channels with different policies for sorting/expiration/etc. so that like, social feeds don’t overwhelm newsy stuff
#[jacky]esp for "firehoses" or even timed events associated to a hashtag
#[fluffy]and also some sort of filtering system that can suss out which things I’m more likely to actually read vs. lower-priority stuff I’d be likely to skip, with maybe some simple Bayesian classification or something
#[fluffy]Google Reader had a really nice “stream of content” view where it was just, hey here’s a bunch of stuff, read/skim/etc. in order and you eventually are Done
#[jacky]yeah this is def one of those spaces where we'd have to futz with an interface
#[fluffy]I’m pretty happy with FeedOnFeeds except that it’s old PHP and lacks any of the social stuff, and adding TicketAuth would be janky because it’s built to share subscriptions between all users by default in ways that make it hard to add any sort of identity sandboxing
#[jacky]yeah I was trying to find a offline reference to ticketing (and happened to be by a vending machine tbh, lol)
#[jacky]you put a ticket and get a snickers bar (or private note)
#LoqiIt looks like we don't have a page for "FeedsOnFeeds" yet. Would you like to create it? (Or just say "FeedsOnFeeds is ____", a sentence describing the term)
#[fluffy]I’m a bit surprised (and somewhat annoyed) that the original author continues to run the http://feedonfeeds.com website but has no interest in updating the page to point it to any of the modern forks
#[fluffy]the latest version on the “official” site doesn’t even run on php5
#[fluffy]A few of us have reached out to him and he’s just completely unresponsive. I’m guessing he’s just got a bunch of stuff on autorenew and he hasn’t thought about it in ages
#[snarfed]yeah that's the kicker. FB/IG's closed down APIs mean the only paths to this kind of thing are scraping, which is unsustainable for developers and risky for users, or making each user register their own API key, which is awful inaccessible UX
[jacky], [aciccarello], roxwize and thekifake joined the channel
#[tantek]yup, all true. feel free to add all the caveats to the page / list-item on this
#[tantek]same can be said I think about POSSEing to Twitter these days
#[tantek]as in I no longer want to "create a Twitter app"
#barnabyyeah, my application for a new twitter app for POSSE in early post-musk twitter was rejected. unless you’re willing to put the effort in to create a convincing application (or things have changed since then) it might simply not be possible
#barnabynow it’s not so much “don’t want to create a twitter app” as “don’t want to use twitter anymore” for me
#[tantek]I hear it's got fewer ads these days at least! 😉
#barnabyoh yes, it was very amusing to watch. afaik they kept on trying to change things in tweetdeck, trying to change various API endpoints, and immediately having to roll it all back because it had unintended consequences for other services
#barnabybut they did eventually turn off old tweetdeck and migrate to new tweetdeck, which requires a paid subscription to use (and from what I’ve heard is much worse than old tweetdeck)
#barnabyfor about a month it was hit-or-miss on whether tweetdeck was going to work on any given day. it would go away and then come back seemingly at random
#barnabyI think the (already understaffed) tweetdeck team were some of the earliest people to leave/be fired, too
thekifake and gxt joined the channel
#[snarfed]anyone have any recommendations for a Python meta tags lib? for extracting and normalizing OGP, Twitter cards, HTML meta, etc.
#[snarfed]fwiw I see lots of conneg interop issues, but I don't think I've ever seen anything else choke on those two ^ response content types. would be surprising
#[tantek][snarfed] re: "meta tags lib? for extracting and normalizing OGP, Twitter cards, HTML meta", WDYT of metaformats as a specified way of doing that?
#Loqimetaformats started as an April Fools joke concept to describe how to both publish using microformats class names and openly parse meta tags as a fallback for what should be in-the-body visible data, including backcompat with OGP, Twitter Cards, and meta author, description, and anything else real sites (like search engines) appear to consume https://indieweb.org/metaformats
#[tantek]if the Python microformats parser adds metaformats support, you can "just" use that instead of needing a separate "Python meta tags lib"
#[snarfed]sure! that's a big if, but if that, then absolutely
#[tantek]that's the path I'd prefer to see rather than everyone having to figure out their own precedence for OGP or Twitter Cards etc.
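[Editor's note: the "precedence" tantek mentions is the crux of any meta-tags consumer. Here is a hedged sketch of one possible ordering (OGP, then Twitter Cards, then plain `<title>`/`<meta name="description">`); this ordering is an assumption for illustration and not what metaformats actually specifies.]

```python
# Extract a normalized title/description/image from a page's meta tags
# using only the standard library.
from html.parser import HTMLParser


class MetaTagParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}        # property/name -> content
        self.title = None     # text of the <title> element
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta":
            key = attrs.get("property") or attrs.get("name")
            if key and "content" in attrs:
                self.meta.setdefault(key, attrs["content"])
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data


def page_summary(html):
    p = MetaTagParser()
    p.feed(html)
    pick = lambda *keys: next((p.meta[k] for k in keys if k in p.meta), None)
    return {
        "title": pick("og:title", "twitter:title") or p.title,
        "description": pick("og:description", "twitter:description", "description"),
        "image": pick("og:image", "twitter:image"),
    }
```

Consistent with the later discussion, this only looks at title, description, and image, since anything beyond those is rarely published reliably.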
#[manton][snarfed] You’re right, seems very safe to change to activity+json.
#[manton]Mini rant: I don’t love that the spec seems to encourage `application/ld+json; profile=` instead of just `application/activity+json`. One is verbose and one is simple.
#[aciccarello][snarfed] That definitely sounds like what metaformats is trying to accomplish. I'll try to write up a post on how metaformats works in the nodejs parser before San Diego but the code is here if someone wants to take a look.
#[aciccarello]Metaformats probably needs some refining though
#Loqi[preview] [aciccarello] #229 feat(Experimental): add support for metaformats
#[aciccarello]I didn't include dublin core or json-ld support. I'd love to look at stats of usage.
#[tantek]dublin core doesn't really have any consuming code use-cases, so it's unlikely to be a source of anything useful (more likely spam / metacrap)
#[aciccarello]The challenge is that OGP is inconsistently used; most sites only optimize for social media previews, so anything beyond title, summary, and maybe an image is hard.
#[tantek]that's right, that's why metaformats only looks at those few
#[tantek]good proof of the point that only real world consuming code use-cases help make a standard/format actually "work" consistently
#[tantek]and the json-ld data-island consuming code use-case is unreliable (Google SERPs) so that too has questionable (or spammed for SEO) info
#[aciccarello]I don't even know where you'd begin with json-ld data. There are too many ways to define it and like you said, it's basically all for google parsers.
#[aciccarello]I should really do some analysis of meta tags on some major sites
#[aciccarello]I have a hunch twitter cards aren't really necessary. Most of the time OGP and title/meta=description tags will suffice. I could be wrong though.
#[tantek]agreed which is why Twitter Cards barely show up in metaformats
#gRegorThe only Twitter-specific one I've ever published was `name="twitter:card" content="summary_large_image"` so it would show the og:image larger, above the title and description. I think otherwise it defaulted to show the image smaller on the left next to the title and description
#gRegorSo that part's not really useful for metaformat parsing
#gRegorI skipped publishing their twitter:image; og:image worked fine