#[shaners]kylewm: I get the gist of what it's supposed to do (well named, and all) and don’t actually need to use it. I was just curious since php.net didn’t have anything for it.
#[shaners]and as best as I can tell, it’s not being loaded in from an external lib. (but I don’t know php well enough to know what i’m looking for)
#kylewmyeah I agree with your assessment, I don't see it being loaded via composer or an explicit require. not sure where it's coming from
#miklb[shaners] I'm pretty much where you are at, but looks like it might be a function of a non-standard extension URIUtilsExtension
#kylewmbut they might well put the <link rel> in their template so that it shows up on every page
#GWGBut if the server doesn't announce it, it isn't the client's fault.
#kylewmI think 400 is fine in that case, like, nobody made a mistake, you're just explicitly saying that the webmention was rejected
#GWGSo, is it worth considering removing the endpoint link from pages that are not able to accept them?
#sknebelyeah, if the server claims to accept WMs and then doesn't, it is strange. from the server perspective it is a client error, but the client couldn't know it is wrong
#kylewmGWG: good point yes you're right. though that's maybe a bug in bridgy and not something you should necessarily design around
#GWGkylewm, but it is an area worth exploring with Bridgy as one possible use case
#sknebelkylewm: does it refetch after a failing WM?
#kylewmsknebel: it tries to resend failed WMs a few times, with exponential backoff
#sknebel(does aaronpk use bridgy? if yes, how did it work with his experiment with expiring endpoints?)
#sknebelkylewm: I meant, on failure, does it try to re-discover a new endpoint?
#sknebelor does it back-off long enough for the cache to expire?
#sknebelif you haven't gotten any complaints endpoints probably are quite stable in practice
#kylewmaaronpk's webmention endpoint doesn't expire anymore. iirc, at the time he was testing it, he did give bridgy a non-expiring one. i don't remember how exactly
#kylewmsknebel: it doesn't explicitly clear the cache and try sending the webmention again right away, but yeah the exponential backoff should give it plenty of time to clear out normally
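The retry behavior kylewm describes — resend failed webmentions a few times with exponential backoff, long enough that a cached endpoint-discovery result will normally have expired before the last attempt — can be sketched roughly like this. This is an illustrative Python sketch, not Bridgy's actual code; all names and delay values are hypothetical:

```python
import time

def send_with_backoff(send, base_delay=60, max_attempts=5):
    """Retry a webmention send with exponential backoff.

    `send` is any zero-argument callable returning True on success.
    Delays grow as base_delay * 2**attempt, so by later attempts a
    cached endpoint-discovery result has likely expired and a fresh
    discovery would pick up the site's current endpoint.
    """
    for attempt in range(max_attempts):
        if send():
            return True
        time.sleep(base_delay * 2 ** attempt)
    return False
```

With `base_delay=60` the delays run 60s, 120s, 240s, 480s, which is the "plenty of time to clear out normally" effect described above.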
#GWGI just want to better indicate to the sender not to give up on the domain
#KartikPrabhuis this actually a problem or are we guessing?
#sknebelGWG: I feel like that would be the cleanest approach. it avoids unnecessary work for both sides (not sending/processing invalid WMs) and if something caches a negative discovery result that's a bug on its side
#tantekkylewm: somehow I left that out of my commit at some point? weird.
[shaners] joined the channel
#[shaners]tantek: Do these two functions exist anywhere else (and were copy/pasted into the parser)?
#tantekif I could shove all the gitignore, package, phpunit post-process meta-chaff into "chaff" dir I would
#tanteknone of that stuff is relevant enough to be top level
#[shaners]I agree. And all the Filefiles. I’ve long wanted a config or settings or misc or whatever directory for the files-for-computer.
#[shaners]“/lib" is an existing Rubyland thing. It’s not worth the effort for me to fight it. I get more for free by using the conventions than I would get by doing it another way.
#miklbhonest question, does that get_absolute_url need to be extracted for php as well or is it somehow referenced from cassis
#Loqitantek meant to say: miklb yeah, because .js is more trendy
#tantekkylewm link-rel-parser-php is just an extraction of a few open source functions from Falcon. Falcon depends on cassis
#tantekI open sourced them because I looked at existing PHP link header / rel parsing code and it was all super-over-bloated object-class hierarchy tons of files crap
#tantekand so I wrote my own probably 1/10 the size and # of files. e.g. 1 file. zero classes/objects
#tantekand then published it because it was so much smaller than existing php libs
#miklbfair enough, then that still would need to be added to the link-rel-parser-php repo. I think
#[shaners]tantek: is link-rel-parser also meant to have get_absolute_uri() in it too?
#tantekI don't pretend to understand or use any of the composer / package type stuff
#[shaners]tantek: I think we all understand that parts of cassis got extracted to become link-rel-parser. What we don’t understand is how link-rel-parser loads get_absolute_uri() from cassis if link-rel-parser doesn’t load cassis anywhere in the repo.
#[shaners]So, what you’re saying is that link-rel-parser depends on cassis?
#tantekif I'd had more time I'd have made http_rels and possibly also head_http_rels work in js as well as php and put them into cassis directly but I didn't so I just shared them separately
#tantekat the time I wrote them, I pointed out that I had, and was asked to share them (I think by aaronpk)
#tantekas you observed, http_rels uses get_absolute_uri which is defined by cassis.js. thus whatever code includes http_rels must also include cassis.js
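For context on what a function like http_rels does, here is a simplified Python sketch of parsing rel values out of an HTTP Link header. This is NOT tantek's link-rel-parser-php (which is PHP, and relies on cassis.js for get_absolute_uri); it is just an illustration of the core idea, handling the common `<url>; rel="a b"` form and skipping edge cases a real parser would cover:

```python
import re

def http_rels(link_header):
    """Parse an HTTP Link header into a {rel: [urls]} mapping.

    Handles the common comma-separated form:
        <https://ex.example/wm>; rel="webmention", <...>; rel="author"
    A rel attribute may contain multiple space-separated values,
    each of which maps to the same target URL.
    """
    rels = {}
    # Each link-value is <url> followed by ;-separated parameters.
    for match in re.finditer(r'<([^>]*)>((?:\s*;\s*[^,<]+)*)', link_header):
        url, params = match.group(1), match.group(2)
        rel_match = re.search(r'rel\s*=\s*"?([^";,]+)"?', params)
        if rel_match:
            for rel in rel_match.group(1).split():
                rels.setdefault(rel, []).append(url)
    return rels
```

Note this sketch returns the URLs exactly as they appear in the header; resolving a relative URL against the request URL is the part that, in the PHP original, is delegated to cassis's get_absolute_uri — which is exactly the dependency being discussed here.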
#tantekI don't like the term "dependency" because it is overloaded and vague
#miklbif something won't work without something else, what would be a good term?
#[shaners]Seems like link-rel-parser should include cassis itself, so that it works “out of the box” without additional configuration etc.
#tanteknope. probably less work for me to make the functions work natively in cassis than deal with "out of the box" packaging stuff
#tantekso the code is provided as-is because that's how it was requested
#tantekbut since the choice was wait because it wasn't done, or provide something people could use with some work, I chose the latter
#tantekor people can do A LOT more work to try to use the overcomplex other PHP "libs" for parsing link headers
#[shaners]I appreciate that you wrote this code and open sourced it. It saved me a lot of thinking on this problem. I was able to just think about the php->ruby conversion.
#tantekmore source, more files, more objects, more documentation to read etc. etc.
#tantekand right - because I wrote SHORT source with minimal meta-packaging-chaff, it was more readable for you
#tantekand that readability made it more convertible
#[shaners]Well, to be frank, all of your one letter variables were less readable. :wink:
#tantektoo much of my legacy C/Assembly programming is showing sometimes
#[shaners]My confusion was/is how this library would work for anyone who used a base url. Since cassis isn’t loaded by link-rel-parser. And since it isn’t documented anywhere, say in composer.json or in the README or in the src file that cassis is needed for get_absolute_uri to work.
#tantekprobably good to add something to the readme yeah
#[shaners]My bet is that aaronpk hasn’t noticed because no tests actually hit that code path.
#aaronpkgotta figure out how to surface those errors to myself better
#kylewm(i'm open to suggestions... to notify you when the token expires like bridgy does, you need FB Canvas permission which is too hard to get)
#snarfedhuh. we got an explicit permission for that?
#snarfedor did we just have to configure the app for canvas? which is also annoying - i basically redo it every time i apply for a new perm, then undo immediately after - but still
#voxpellimiklb: I wonder if it would make sense to enable my webmention endpoint to send a micropub request with each mention to the site that received it so it can publish it wherever it chooses?
#voxpelliprobably won't make it a high priority though as I think it's enough to have the mentions be curlable
#miklbIf I understand that correctly, could have a plugin like what I'm using for webmention.io currently to pull in the webmentions into a cache file?
#voxpelliI think that would add unwanted complexity to my site, I want my site to be about my content and defer enhancements with references to mentions to external services
#miklbvoxpelli it seems that Aaron Gustafson's webmention.io jekyll plugin could easily be modified to use your api endpoint and allow received webmentions be cached on build and displayed that way for people that want that option.
#voxpellikylewm: one can download backups from herokuapp + host the herokuapp oneself :)
#voxpellimore thinking if there's an advantage of having it directly in the html vs having it loaded with javascript
#miklbbut an automated backup on build isn't a bad thing. I use a combination. I serve the js for any mentions between build/deploy, but cache the mentions and display them
#voxpelliI see the realtime of the latter and the decreased complexity of the main app, the separations of concerns, as big wins
#voxpelliit's a tricky question, hard to frame I think
KartikPrabhu and jciv joined the channel
#kylewmcould a php person check if their system has xmlrpc_decode? (mine doesn't without apt installing php5-xmlrpc, but travis ci seems to have it by default)
#singpolymakylewm: same as you I need php5-xmlrpc for that function to be defined
rrix joined the channel
#voxpellikylewm: mine has it, installed through Homebrew-PHP
#tantekaaronpk: for archiving the webmention rocks test results/comments, I'd suggest putting them in an "Archived responses" section at the bottom, perhaps clustered by implementation
#tanteks/at the bottom/at the bottom of each test page
#Loqitantek meant to say: aaronpk: for archiving the webmention rocks test results/comments, I'd suggest putting them in an "Archived responses" section at the bottom of each test page, perhaps clustered by implementation
#miklbvoxpelli I'm going to switch over to herokuapp so I can better test/explore the implementations. Also in case decide to incorporate the micropub with it
#tantekmaybe keeping 2-3 per implementation, e.g. looks like we have (1) Known, (2) WordPress (plus what plugin(s)?), (1) Nucleus, (1) Falcon using link-rel-parser-php, and (1) Ronkyuu
#tantekKevinMarks: good question - do we distinguish just showing up on the test page without name/photo/content/permalink compared to all the above showing up?
#tantekand that's a hand-authored static page right?
#KevinMarksyes, but the webmention sending is a separate implementation
#KevinMarksI could hand-author a static page on kevinmarks.com and send it the same way
#KevinMarkshm, also should test indiewebify.me sending
#tantekkevinmarks - I think especially with proxy-like services, it helps to use a separate domain, and then document on your post how it worked, because it's not as obvious what's going on (what implemented what)
#tantekplus does mention-tech support sending updates? hoping that you sending an update would fix the noname/nophoto/nocommenttext problems on the resulting comments on webmention rocks
#aaronpkkylewm: xmlrpc is an extension. i ended up just building the XML by hand for mention-client to not rely on that being on the server
#voxpellikylewm: aaronpk: XML-RPC always answers 200, doesn't it? That's kind of the thing with RPC, that it just uses HTTP as a transport and uses its own format for everything else
#aaronpkxmlrpc_decode isn't guaranteed to be installed, and likely the same for other libraries
#Loqi[brid-gy.appspot.com/post/twitter/mention_tech/707770354840838144] mentioned mention-tech.appspot.com/ ✅ a month ago
#kylewmso if the pingback function really just returns true or false for success, then we can just check for the presence of <fault> in the response, i think
#aaronpkhuh yeah i don't know. apparently the value of the u-logo counts towards the value of the name since it's inside the p-name
#aaronpki don't think it knows it's a url at that point, since it's *inside* the p-name tag
#kylewm"else return the textContent of the element, replacing any nested <img> elements with their alt attribute if present, or otherwise their src attribute if present, resolving any relative URLs, and removing all leading/trailing whitespace."
#tantekKevinmarks - in situations like this, always try microformatshiv - it tends to have the most spec-precise results because of the incredibly thorough test suite that Glennjones wrote up with it
#gRegorLoveis working on implementing that test suite for php-mf2, btw
#KevinMarksright, we need to get back to iterating those in python as well
KevinMarks joined the channel
#KevinMarksa parsing thought - if we do fall back on the src of an img inside a p, should we wrap it in spaces?
#GWGTo use an analogy, if webmentions are plumbing... I want to use the best pipes because I'm afraid that the people buying the house... (WordPress Core) won't want it if it isn't structurally strong.
#GWGI'm going to take a break from this obsession for the afternoon and go back to obsessing about work.
#voxpelliI think version numbers + urls in user agents are pretty good because then one can easily pinpoint a bad-behaving client and give them feedback on how to improve
#aaronpkI would recommend using a user agent that describes your software, nothing specific to webmention. It can be used for any other http request you might be doing too
#aaronpklook at how slack and other sites set their user agent for doing link previews
#tanteklike there should be no difference in UA between when curling a pingback source and a webmention source
#tantekso what's your existing practice when curling a pingback source?
#aaronpksame for fetching images if you're running an https image proxy
#aaronpkSpeaking of which I need to do that for webmention.rocks
#LoqiUser-agent is a common HTTP header that generally indicates the name, version, and a URL for the application making the request https://indiewebcamp.com/user-agent
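The advice above — one descriptive User-Agent naming the software (with a version and URL), reused for every fetch the software makes, whether it's curling a pingback source, a webmention source, or an image — can be sketched like this. The UA string and URLs are hypothetical examples:

```python
from urllib import request

# A descriptive User-Agent per the advice above: it identifies the
# software with a version and a URL, and says nothing specific to
# webmention, so the same UA serves every HTTP request the software
# makes. (Name, version, and URL here are made-up examples.)
USER_AGENT = "example-publisher/1.2 (+https://mysite.example/bot)"

def make_request(url):
    """Build a request carrying the software's User-Agent."""
    return request.Request(url, headers={"User-Agent": USER_AGENT})

def fetch(url):
    """Fetch a URL (pingback source, webmention source, image...)
    with the same descriptive User-Agent every time."""
    return request.urlopen(make_request(url))
```

This mirrors the pattern aaronpk points to in how Slack and similar sites identify themselves when fetching link previews.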
#tantekvoxpelli: can you document perhaps "Software Examples" there with what you know of what WordPress does currently?
#voxpelliGWG: ^ perhaps you are better at that? I just retold what you posted in your issue :)
#GWGI have to head back to work. Will try to Tonight.
#kylewmwould be kinda nice to have an https image proxy that we could all use (including the wiki)
#voxpelliaaronpk: how does the lambda one serve the stored images?
#kylewmoh I remember, ca3db actually has like an upload step, you can't just construct a URL at render time
#aaronpkThe lambda app just puts it in an S3 bucket
#aaronpkkylewm: it's not an upload, more of an API call
#aaronpkyou tell it the URL you want to archive and then it gives you back an S3 URL
#voxpelliaaronpk: so there will be two different URLs for the S3 one and the lambda-generated one, or can API Gateway handle that a miss in one results in a call to the other?
#KevinMarksexcept the HTML is really strictly defined
#miklbsaw some discussion earlier about serving images, recently came across https://www.imgix.com and what I liked is it will serve from your own S3 bucket but provide optimized sizes on the fly.
#miklbmaybe not "serve from you S3" but pul from S3