tantek.com edited /reacji (+590) "t using reply posts now for reacji, with in-stream reply-contexts! move display and sample markup from brainstorming to how to, why" (view diff)
[jgmac1106], [chrisaldrich], tantek, leg, snarfed, chrod and eli_oat joined the channel; chrod left the channel
Loqi: Sebastiaan and I were on a philosophy bender, apparently. One of the concepts to come out of it is the head cache: when you have filed something in the back of your mind for future use.
The problem, just like with a computer cache, is that ...
Zegnat: I am pondering how to extract the h-entry from such a permalink correctly. My first instinct is: run the mf2 parser on the whole page, then find the object whose url property equals the URL of the page, including the matching fragment.
Zegnat: Basically copying the behaviour of finding a representative h-card for a URL: find the h-entry on this page that represents the given URL (which happens to contain a fragment identifier)
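Zegnat's selection step could be sketched roughly as follows, operating on an already-parsed canonical mf2 dictionary (as produced by a parser such as mf2py). The function name and sample data are illustrative, not from any existing tool:

```python
def find_hentry_for_url(parsed, permalink):
    """Return the first h-entry in a canonical mf2 parse result whose
    `url` property matches the permalink, fragment included.
    Recurses into nested children, mirroring representative h-card
    discovery."""
    def walk(items):
        for item in items:
            if "h-entry" in item.get("type", []):
                if permalink in item.get("properties", {}).get("url", []):
                    return item
            found = walk(item.get("children", []))
            if found:
                return found
        return None

    return walk(parsed.get("items", []))


# Illustrative parse result for a page using fragment permalinks:
parsed = {"items": [
    {"type": ["h-entry"],
     "properties": {"url": ["https://example.com/#2018-02-17-150018"],
                    "name": ["a note"]}},
]}
entry = find_hentry_for_url(parsed, "https://example.com/#2018-02-17-150018")
```

A fuller implementation would probably also normalize the URLs before comparing, the same way representative h-card parsing does.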
jeremycherfas: I currently have `<link rel="authorization_endpoint" href="https://indieauth.com/auth">` in my HEAD. I guess I need to get rid of that, at least temporarily, to try selfauth?
jeremycherfas: Right. Wondering how best to test. Locally, and then open via ngrok, or in production. In some respects production is easier, even if slightly more dangerous.
skippy: i just recently switched from indieauth.com to selfauth. it's as easy as changing that header link, and then logging in from your apps again. Or at least, it was that easy for me.
schmarty: Zegnat: really interesting challenge about fragment permalinks and authorship! i put fragment IDs on posts on ghostparty.today and include invisible author data in each: https://ghostparty.today/#2018-02-17-150018
Zegnat: schmarty, yeah, trickery required. This is why I recommended against the use of fragments for permalinks at IWC Berlin: I was pretty sure the tools aren’t ready yet. I would much rather be the experimental case myself than someone who is just thinking about starting on the indieweb
sebsel, maingo, [davidmead], tantek, AngeloGladding, snarfed and jmac joined the channel
Loqi: [jmac] So the out-yonder website that linked to my blog which I was excited about yesterday, as it gave me a real-world non-Bridgy Webmention source to test? It's hosted by Tumblr, and therefore the URL that links to my site is http://t.umblr.com/redirect?b...
snarfed: jmac: what's the source of that wm? bridgy mostly handles the t.umblr redirects (it wraps all urls), but yeah, it technically has to send to the wrapped link, not to the final url, due to the wm spec
jmac: So the mention-sender would set the wm's source to (in this case) http://t.umblr.com/redirect?blah, and my receiver would go ahead and load it, even though it's not in my domain. And if the *ultimate* URL it ended at was one it did in fact care about, then it's a legit wm. Right?
jmac: The oddity here is the necessity of loading the target URL, which is at an unfamiliar domain, but we want to see if it'll redirect us to the domain we do accept wms for.
jmac: And I was thinking I could use it as a test case for manual webmentions. Then I saw that my current implementation wouldn't work, as written, because that page does not literally contain my own URL anywhere on it. The end
aaronpk: so yes, if you got a webmention with that source URL and the target was your own URL, that webmention *should* fail validation, since your URL is not actually present on the page
jmac: So the solution, such as it is, is to have the target be that wacky Tumblr-redirect URL, and then have my webmention-processor go through the contortions described earlier.
jmac: Well, assuming I have done the right thing, the processor is already running asynchronously from the receiver. So having another avenue for receiving bogus webmentions bound for the garbage can isn't really much of a practical risk, yes?
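The contortions jmac describes might look something like the sketch below: resolve the target through its redirect chain first, then check whether the final URL lands on a domain the receiver actually accepts webmentions for. The fetch step is stubbed out as an injected function, and all names here are illustrative, not from any real receiver:

```python
from urllib.parse import urlparse

def resolve_redirects(url, fetch_redirect, max_hops=5):
    """Follow a redirect chain. fetch_redirect(url) returns the next
    URL in the chain, or None if the URL does not redirect. Stops on
    loops or after max_hops to avoid redirect bombs."""
    seen = set()
    while url not in seen and len(seen) < max_hops:
        seen.add(url)
        nxt = fetch_redirect(url)
        if nxt is None:
            return url
        url = nxt
    return url

def accepts_target(target, accepted_domains, fetch_redirect):
    """A target like http://t.umblr.com/redirect?... is acceptable if
    its redirect chain ends on a domain we receive webmentions for."""
    final = resolve_redirects(target, fetch_redirect)
    return urlparse(final).hostname in accepted_domains


# Simulated redirect chain standing in for real HTTP fetches:
chain = {"http://t.umblr.com/redirect?b": "https://example.org/post"}
ok = accepts_target("http://t.umblr.com/redirect?b",
                    {"example.org"}, chain.get)
```

As L28 notes below won't surprise anyone who has tried this, a real fetcher would also have to handle non-HTTP redirection, so the stub is doing a lot of work here.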
Loqi: [aaronpk] This actually isn't as bad as it sounds. Right now, you can send webmentions with these URLs and still fall within the spec, but only because the spec doesn't specify criteria on whether a URL should be "[supported by the receiver](https://www.w3.org...
aaronpk: this is one of the reasons I am intentionally not opening up Aperture for signups. I want there to be more choices of Microsub servers instead of everyone using the one I run.
snarfed: and yeah, to the diversity point, i'm also sad that there aren't any other meaningful backfeed implementations for the major silos. if i had put less work into bridgy, there might be!
tantek: thanks schmarty! now I just have to get around to packaging up and releasing the PHP functions I wrote for auto_url_summary (the human-readable text synthesized from known structures of silo URLs) and is_one_emoji
tantek: will probably drop them into a file like cassis-lab.php, since they haven't been tested across JS and I have no plans to use them client-side, though I have no objections to CASSISifying them, especially if there is demand
jmac: Continuing from this morning's discussion of webmentions and redirects: I've dug a little further into Tumblr's own redirection stuff, and if you request a redirection-service URL that it gives you, it returns (as HTTP 200) a tiny document with <meta http-equiv="refresh"> and JavaScript-based redirection. So, no HTTP-level redirection.
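Since Tumblr answers with HTTP 200 rather than a 3xx redirect, a receiver chasing these links would have to dig the target out of the HTML itself. A rough sketch with the Python standard library parser (class and function names are made up for illustration):

```python
from html.parser import HTMLParser

class MetaRefreshFinder(HTMLParser):
    """Extract the target URL from a <meta http-equiv="refresh"> tag,
    whose content attribute looks like "0;url=https://example.com/"."""
    def __init__(self):
        super().__init__()
        self.target = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("http-equiv", "").lower() == "refresh":
            _, sep, url = a.get("content", "").partition("url=")
            if sep:
                self.target = url.strip().strip("'\"")

def meta_refresh_target(html):
    """Return the meta-refresh destination, or None if there is none."""
    finder = MetaRefreshFinder()
    finder.feed(html)
    return finder.target
```

A receiver could try this as a fallback whenever the redirect chain ends on a 200 response, then continue following the extracted URL; JavaScript-based redirects would still be out of reach without a full browser.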
Loqi: [gRegorLove] Alright, guess I chased that rabbit trail more than was necessary. After testing the rel-me library locally, I learned that cURLing the t.co links is returning the proper redirects. So it was just a case-sensitivity issue after all.
I verified the ...
schmarty: dgold: eep, that sounds very broken! morris *should* be creating a new data/mentions/XX...XX.json file for each incoming webmention, then updating index.json to add the new mention to the list of mentions for the target page's path.
Zegnat: schmarty, if you are depending on the publish file, make it a require rather than an include? (Completely unsolicited programming advice; I just read a thing in the logs...)
jmac: snarfed: thanks, yes. I hate the advice "just special-case the domain" for the usual reasons, but... it's tumblr, and I guess it's an acceptable trade-off to have 10 lines of code account for 800 pounds of gorilla
[cleverdevil]: So, it's for the entire storage. I got a version working with S3 only, but I had to create a ton of index files, and do a lot of round-tripping to get it all working. It ended up being super slow.
[cleverdevil]: S3 is tantalizingly close to being able to be a full-fledged JSON document index and store, but it's *just* missing a few features. I expect that may change over time.
Zegnat: We just aren’t exposing it, because logging things to system logs is generally not a good idea unless you know what you are doing and have access to them yourself
Loqi: [Charles Lecklider] Description
fail2ban is one of the simplest and most effective security measures you can implement to prevent brute-force password-guessing attacks.
WP fail2ban logs all login attempts, whether successful or not, including via XML-RPC, to syslog ...
Loqi: [Federico Rota] Description
This plugin writes a log of failed access attempts (brute-force attacks) and invalid pingback requests (via xmlrpc.php). Very useful for processing the data via fail2ban.
You can activate the log for each pingback request feature and stop t...
skippy: i run my own server, though, so i could read syslog. i just generally like to keep things a little more segregated; userspace stuff writing to syslog seems wrong.
ancarda: skippy, Zegnat: I’d be very willing to write a more extensive logging platform (syslog, email, or log file). Possibly in a future version of selfauth?
ancarda: skippy: I don’t use fail2ban; do you know if it can be configured to read from any file? Perhaps you could point it at /var/log/(messages|syslog) and include some kind of filter/parsing code to identify the IndieWeb lines?
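To answer ancarda's question: fail2ban can indeed watch any log file via the `logpath` setting in a jail, paired with a custom filter whose `failregex` uses the `<HOST>` tag to capture the offending IP. Assuming selfauth logged failures to syslog with a line like `selfauth[123]: failed login attempt from 1.2.3.4` (a hypothetical format, since no such logging exists yet), the pair of config files could look like:

```ini
; /etc/fail2ban/filter.d/selfauth.conf (log line format is hypothetical)
[Definition]
failregex = selfauth\[\d+\]: failed login attempt from <HOST>

; /etc/fail2ban/jail.d/selfauth.conf
[selfauth]
enabled  = true
filter   = selfauth
logpath  = /var/log/syslog
maxretry = 5
bantime  = 3600
```

The regex would have to be written against whatever format a future selfauth release actually emits.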