#dev 2021-06-12

2021-06-12 UTC
[chrisaldrich], samwilson, [jeremycherfas] and [KevinMarks] joined the channel
#
[KevinMarks]
The slicing makes sense if you expect people to not watch all of it too
#
[KevinMarks]
Another trick is to upload to YouTube so they encode it, then download it again; they have had a team of experts tuning the encoding quality/bit rate for a while now.
nsh, nertzy, [KevinMarks], [schmarty], barnaby and capjamesg joined the channel
#
capjamesg
Hello IndieWeb! Could someone please send a webmention to https://jamesg.blog/printer/? I'm going to be testing my webmention printer tomorrow and will need a recent webmention (and I'd like my first message to be from a community member!).
#
barnaby
indieauth question: according to the spec, an authorization code MUST become invalid once exchanged for an access_token
#
barnaby
if the exchange request results in an error, should the authorization code remain valid?
#
aaronpk
good question, let me see if the oauth spec says anything about this
#
barnaby
I’ve been reading through it myself and couldn’t find anything
#
aaronpk
noticed that since it's marked as updating oauth 2.0 core
#
barnaby
although I did find some interesting information about required caching headers which I just added to my tests
#
aaronpk
oh you should ignore the Pragma header, that was a mistake
#
barnaby
good to know
#
barnaby
but the Cache-Control one is still required?
#
aaronpk
yeah that one is still a good idea
#
barnaby
out of interest, how is anyone supposed to know that, other than having access to a helpful aaronpk to tell them?
#
barnaby
is there a big list of errata somewhere?
#
aaronpk
there is, i'm not sure if this one made it in tho, but we are fixing it in OAuth 2.1
#
barnaby
which you have to read in addition to the many oauth specs
#
Loqi
[adeinega] #32 Remove the Pragma header.
#
barnaby
I suppose I’m just spoiled by being used to living standards where stuff like that can be updated
#
aaronpk
yeah IETF RFCs can't change, but you can still think of OAuth as a "living standard" in that things get added to and removed from it over time by publishing new RFCs
#
barnaby
I’ve been browsing the IETF RFCs here https://datatracker.ietf.org/doc/html/rfc8996, does the “living” version live somewhere browsable?
#
aaronpk
no, each working group chooses where to manage their in-progress drafts. sometimes it's on github, sometimes it's not public
#
aaronpk
the only "official" drafts prior to RFCs are ones with "draft-ietf" in the URL, like the current status of OAuth 2.1 https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1-02
#
aaronpk
those are in progress and will eventually be RFCs where they'll be frozen
#
barnaby
that must be a job to keep track of
#
aaronpk
if you want to know about the state of things within a particular working group you have to get more involved and follow the group's mailing list and such
#
aaronpk
i just made this page which is a summary of the status of all the oauth group drafts https://oauth.net/specs/
#
barnaby
oh nice
#
aaronpk
that will give you an idea of all the things the group is in the middle of working on
#
barnaby
oh wow, I didn’t realise OAuth2 went back to 2012!
#
aaronpk
heh yeah it's not exactly new
#
aaronpk
my personal goal is to publish oauth 2.1 before the 10 year mark :)
#
barnaby
so reading through the indieauth spec again, the only place I can find a reference to auth code lifetimes is https://indieauth.spec.indieweb.org/#authorization-response “The code … MUST be valid for only one use”
#
barnaby
so unless a different spec expands on this, I guess the question is, is “one use” one attempted request, or one successful request
#
barnaby
also, neither spec’s response example includes the Cache-Control header in the auth code issuing redirect response, but I guess it’d be suitable there too
#
aaronpk
i just filed this so i remember to add this to 2.1 https://github.com/aaronpk/oauth-v2-1/issues/82
#
Loqi
[aaronpk] #82 Clarify what should happen to authorization codes on an error response
#
aaronpk
but my instinct is that they should be invalidated
#
barnaby
hmm I’m curious how existing implementations handle this
#
aaronpk
the reason being: the only way to get a valid auth code is if you are the legitimate client or if you stole one. the legitimate client should have no reason to make a request with a valid auth code that is otherwise invalid for some reason. an attacker might be able to do more things with the stolen auth code if it remains valid
#
barnaby
yeah that’s a good argument for invalidating an auth code on an invalid request
#
barnaby
expires_in: (seconds) is the official way of communicating token lifetime information, right?
[jacky] joined the channel
#
[jacky]
IIRC yeah according to the spec
#
barnaby
I’ve been using valid_until: epoch seconds, but if expires_in is official then I’ll change up my default implementation and interface documentation to use that
#
[jacky]
I kind of wish there was an 'expires_at' versus 'expires_in'
#
barnaby
token lifetime is an implementation detail left up to the consumer in my library, but I want to at least point people in the right direction
#
aaronpk
there was a whole debate about this
#
barnaby
yeah, expires_at/valid_until was my intuitive first solution too
#
[jacky]
wonders if it's like (I don't wanna require a datetime parsing lib) lol
#
aaronpk
the general consensus was that it's likely that an oauth client has the wrong local time on the device so a relative offset is more reliable
#
[jacky]
damn that makes sense lol
#
aaronpk
considers making an FAQ section on oauth.net to capture that
#
barnaby
ugh wow such mailing list UX
#
[jacky]
tbh it's better than mailman stock
#
barnaby
but somehow worse than, say, phpBB
#
barnaby
which is 21 years old
#
aaronpk
needs some design love but it's a start https://oauth.net/faq/
#
barnaby
nice aaronpk++
#
Loqi
aaronpk has 50 karma in this channel over the last year (150 in all channels)
#
[jacky]
yup that works 🙂
#
aaronpk
waits for the SEO juice to kick in
#
mgdm
I was going to build a webmention receiver using rust and sqlite, as I do not get any traffic
#
mgdm
and my site is static, using zola
#
[jacky]
mgdm: I literally did that lol
#
[jacky]
It has support for pulling webmentions too but only in mf2+json and jf2
#
[jacky]
It also would work for multiple sites (my use case would be an endpoint for each little site I want to track mentions for)
#
[jacky]
I'd be happy to help you set this up!
#
[jacky]
mgdm: ^
LaBcasse[m] joined the channel
#
barnaby
greetings LaBcasse[m]
#
[jacky]
Frankly the mentions can be parsed in whichever format you define (it'd send the URL to something like granary or x-ray or your own tool for parsing a page) so it doesn't need to be in Microformats land
#
barnaby
so correct me if I’m wrong, but it looks like your parser is using hard-coded lists of h-* classnames and property classnames
#
barnaby
microformats2 is designed to be generic, using the classname prefixes to determine how to parse elements, but not needing hard-coded lists of property names
#
[jacky]
Tbh I would have done the same re: class names to base the tests and then refactor to see if it still works with the expected classes (and then throw a wrench 🔧)
#
LaBcasse[m]
Hello
#
barnaby
one of the reasons for this was that a lot of the classic microformats parsers were necessarily based on hard-coded classnames, and they were a lot of work to maintain and went out of date quickly
#
[jacky]
whispers "why not a schema?" and runs
#
sknebel
!tell capjamesg did it print?
#
Loqi
Ok, I'll tell them that when I see them next
#
barnaby
so I’d recommend thinking about how you could implement generic parsing. I get that it’s more difficult in rust, especially if you want to end up with nice rust-like data structures rather than a big hash map
#
[jacky]
That shouldn't be too hard with the [serde(flatten)] approach
#
LaBcasse[m]
I wanted to use a struct with known fields for the h-* elements, so I need to hard-code the properties.
#
barnaby
[jacky]: did you have any ideas about how to handle generic mf parsing in rust? my intuition would be to have the parser build a big hash map, and then have consuming code which turns it into a structure of strongly typed, application-specific structs
#
aaronpk
i am still looking into this indienews issue and i am very confused about what's wrong with it
#
mgdm
[jacky]: oh excellent, I'll have a look, thanks!
#
barnaby
LaBcasse[m]: that’s fine for a parser which only you use (and if that’s your goal then there’s no problem!)
#
LaBcasse[m]
But in the end you need to hard-code the translation from the keys to the fields
#
[jacky]
Barnaby: I was thinking of leaning on serde to handle deserialization. We have some "base" properties for objects that can be made recursive and as long as it's defined with some expected things (like "type") it should work!
#
[jacky]
I'm tempted to test this
#
barnaby
but if your goal is to make something which is useful for other people, having a generic parser, producing generic data which application-specific code then consumes is the way to go
#
LaBcasse[m]
Ok, so it requires building a two-layer parser: one generic layer returning a hashmap, and another, more static one returning nice structures?
#
barnaby
yep, that’s the pattern which mf2 parsing code usually follows
#
barnaby
you have a parser which produces just the mf2 canonical JSON structure documented at https://microformats.org/wiki/microformats2 and https://microformats.org/wiki/microformats2-parsing
#
barnaby
which can then benefit from all of the existing test cases which use canonical JSON expected values
#
LaBcasse[m]
Ok, I'll think about it. I just want to say I am a beginner in rust too, so this project is quite funky.
#
barnaby
I am very much a rust beginner, but I have quite a bit of experience writing mf2 parsers ;)
#
barnaby
so I can’t help much with the rust parts, but can probably give helpful advice for the microformats parts
#
mgdm
this is very much a newbie question but does anyone use exclusively microformats etc as inputs to a feed reader, in place of RSS/Atom?
#
LaBcasse[m]
Ok, I will consider returning a hashmap and I will learn how to introduce some tests in rust.
#
barnaby
definitely chat to [jacky] about the rust parts!
#
aaronpk
LaBcasse[m]: your site apparently does content negotiation and returns an activitystreams JSON object
#
[jacky]
I'm definitely down to help too
#
aaronpk
there isn't a "syndication" property in AS2 so it's failing to find the link
#
LaBcasse[m]
It is writefreely indeed: https://writefreely.org/
#
[jacky]
I actually have been holding off on making a microformats2 parser because I didn't want to bake it into these little apps
#
LaBcasse[m]
<[jacky] "I'm definitely down to help too"> Thanks Jacky.
#
[jacky]
They should be usable without it (but get enhanced by mf2)
#
aaronpk
so this is an interesting challenge
#
barnaby
does indienews’s content negotiation prefer AS over HTML?
#
LaBcasse[m]
<aaronpk "there isn't a "syndication" prop"> What is AS2 ?
#
sknebel
I assume it uses xray, and xray does
#
barnaby
ah I see
#
sknebel
(because you want that for some use cases. but not for others, as we see here)
#
sknebel
LaBcasse[m]: Activitystreams2, the data format used by Activitypub/Fediverse
#
barnaby
given that mastodon publishes mf2, I wonder how many sites which support AS2 content negotiation *don’t* also publish mf2
#
aaronpk
normally it's the case that if there is as2 it's more reliable than the html, mainly because of wordpress' legacy mf1 classes that continue to cause problems
#
barnaby
heh, this would actually be an argument for having some sort of text/html+mf2 MIME type
#
aaronpk
also mastodon's mf2 could use improvement
#
aaronpk
so yeah i don't know what to do here
#
barnaby
if AS2 is negotiated but no link is found, additionally request HTML and parse for mf2?
#
barnaby
I guess that’s a bit awkward to do if XRay is decoupled from the indienews parsing logic
#
LaBcasse[m]
<aaronpk "normally it's the case that if t"> Ok, that explains my error, but there should be a fallback using HTML if the AS2 does not contain the syndication, no?
#
aaronpk
quite separate, it would take a bunch of fiddling
#
barnaby
I’ve had similar issues with my mention handling code delegating to an external parsing+archiving library
#
barnaby
it seems like webmention handling is complex enough to justify structuring things such that having multiple code paths and fallbacks is possible
#
barnaby
either that, or XRay would need to do both by default and return a dataset containing all of the results
#
aaronpk
i don't like the idea of always fetching twice
KartikPrabhu joined the channel
#
aaronpk
i'd need to add a config option to XRay to tell it to not send the Accept headers, then indienews could choose to do the extra fetch
#
aaronpk
this is also somewhat unique to indienews which is expecting the link in the syndication property rather than anywhere in the entry like normal webmention receiving
#
barnaby
IMO fetching twice is the lesser evil
#
barnaby
for something like indienews, it’s still a small number of requests overall
#
LaBcasse[m]
Sorry for my corner case, maybe it is better to not fix this bug.
#
aaronpk
fetching twice for indienews i don't mind, but i don't want xray to always fetch twice every time it encounters non-html
#
barnaby
I suppose a generic solution would be to optionally pass xray a list of properties which the consuming code is interested in, and if it knows that some of them are not supported by AS2, then it negotiates for HTML
#
barnaby
LaBcasse[m]: no problem, it’s a useful bug to be aware of!
#
LaBcasse[m]
By the way, what is XRay ?
#
barnaby
what is xray?
#
Loqi
XRay is an open source API that returns structured data for a URL by parsing microformats and following other indieweb algorithms, and is part of the p3k suite of applications https://indieweb.org/XRay
#
LaBcasse[m]
Thanks to the bot and your translation, barnaby.
#
barnaby
I’m not sure why Loqi didn’t pick up on your initial question
#
barnaby
maybe the space before the question mark
#
LaBcasse[m]
French typo
#
barnaby
nope, looks like it’s the comma before the what
#
aaronpk
it only works for sentences starting with "what is"
KartikPrabhu joined the channel
#
LaBcasse[m]
Good to know.
#
LaBcasse[m]
Is XRay able to read the schema.org embedded data?
#
aaronpk
no, that data is generally too unreliable and not detailed enough to be useful for the things xray is for
#
LaBcasse[m]
True, it is difficult to reuse that data. I wonder what kind of data is useful to gather for reusing the webmention, so the webmention receiver I created gathers a lot of things.
#
LaBcasse[m]
Here is an example of the gathered data: https://webmention.buron.coffee/webmention/60c50f7c0076644a00af8ec4
#
[jacky]
mgdm: if you attempt to set it up and need help, feel free to ping me here or to sign in at https://git.jacky.wtf and leave an issue!
#
barnaby
argh I thought I had a good way of having my indieauth authorization flow be stateless but only have to fetch the client_id once
#
barnaby
but now I realised that if I want the error reporting to strictly comply with the spec, it needs to fetch the client_id multiple times
#
barnaby
as the first thing to do on any request is to fetch the client_id and validate the redirect_uri, so that other errors can be reported to the client app via a redirect
#
barnaby
hmm I suppose I can limit the number of requests by skipping fetching the client_id if the redirect_uri itself sufficiently matches the client_id
#
mgdm
[jacky]: cool, will do, thanks!
#
aaronpk
"If a client wishes to use a redirect URL that has a different host than their client_id, or if the redirect URL uses a custom scheme..."
#
barnaby
but probably the better way to do this is just to allow the library consumer to add a caching adaptor to the HTTP request callback if they want to avoid excess requests
#
barnaby
aaronpk: I was already doing all that checking, and even wrote quite extensive tests for it
#
aaronpk
excellent
#
barnaby
but I was doing it only when the flow got as far as showing the consent screen
#
barnaby
but to strictly comply with the spec, if the redirect uri is valid, then *all* error reporting should be done with error redirects
#
barnaby
which means that validating the redirect_uri, potentially by fetching the client_id, is the first thing I have to do
#
barnaby
I guess it’s not that big a deal
#
barnaby
just complicates things a bit
#
mgdm
[jacky]: I'm opting into a little bit of self-inflicted complexity by trying to deploy this on NixOS, heh, so it might be a wee while before I get it to boot :-)
#
[jacky]
Haha tbh I'm even down to take patches to help it work under Nix! I did optimize it a bit for a Docker/Dokku/Heroku-esque setup but I'm eager to see how else it can be deployed
#
aaronpk
i literally can never remember the difference between @media (max-width: 700px) and @media (min-width: 700px)
#
LaBcasse[m]
Me neither, but I also have problems with the difference between left and right ^^
#
barnaby
huh, I’ve always found them pretty intuitive. styles in the block with max-width apply when the viewport is, at most, that value
#
mgdm
[jacky]: should lighthouse work with the published version of indieweb? I note that the Cargo.toml currently points at a local checkout
#
barnaby
maybe something like (@viewport <= 700px) {} might have been more familiar to programmers
#
barnaby
but I guess there’s some CSS reason for the existing syntax
#
[jacky]
Ah crap, it should. I'm outside right now but I can publish that
#
aaronpk
i guess it's the "at most" or "at least" that i keep getting backwards
#
aaronpk
i think it's because "max-width" means "this size or smaller" and "min-width" means "this size or larger"
#
barnaby
yeah, it’s weird that those keywords can mean opposite things in different contexts
#
aaronpk
so wechat is odd, your personal QR code is actually a URL, but visiting that URL doesn't do anything useful, it just redirects to the app website
#
aaronpk
if you scan the QR code in the app then it will navigate to your profile in the app
#
barnaby
ahhh I love automated testing. it only took a few minutes to refactor my IA library to validate parameters in the right order and always return errors the correct way, while maintaining the same test coverage
#
barnaby
well, okay, 40 minutes, but that’s still quite fast
#
barnaby
finally ready enough to post the still-WIP code to github! https://github.com/Taproot/indieauth
#
Loqi
[Taproot] indieauth: A PSR-7-compatible PHP IndieAuth Server Implementation
#
aaronpk
my website decided to send out activitypub updates from a year ago for some reason
#
aaronpk
i have no explanation
#
barnaby
ooooh I didn’t know you could set up GH pages from a project’s main branch /docs folder
#
barnaby
that’s a very cool feature
#
barnaby
to anyone interested in PHP IndieAuth/Micropub server development, I’d be very grateful for feedback about this README and the specific API docs linked from it: https://github.com/Taproot/indieauth
#
Loqi
[Taproot] indieauth: A PSR-7-compatible PHP IndieAuth Server Implementation
#
barnaby
the docs are by no means complete, but I’d be curious about first impressions of how clear the usage examples and API docs are, and any issues you’d anticipate using the library with your web framework of choice
#
barnaby
(also, the test cases in ServerTest may be of interest to other IA server developers! some of them get rather specific)
#
aaronpk
ok my rel=me link for wechat links to the wechat URL, which as far as I can tell is useless, but I wrote some JS to pop up the QR code if you click it
samwilson joined the channel