tantek, codenamedmitri, singpolyma and Lancey joined the channel
#bear git vulnerability that allows remote code execution when handling large filenames/paths -- please please please upgrade both your server-side and client-side git to version 2.7.3 or better
#bear [kevinmarks] you may need to run brew doctor and also upgrade Homebrew itself to get the version that knows how to handle the OS X permissions change that happened in El Capitan
#Loqi That! (or "that ^" or "that ^^^") is a rarely seen reply often emphasizing agreement with a This post, but sometimes[1] merely emphasizing agreement with a previous reply http://indiewebcamp.com/that
#aaronpk they push you to the editor once you sign in to create your first article, but then encourage you to connect your CMS
#aaronpk also you can't put ads in your articles unless you are pushing to it with the API
#aaronpk but hey if someone wants to read the Apple News API docs and summarize on the wiki how to use it to create a post, I would be thrilled :)
#aaronpk in the meantime, i gotta fix my micropub endpoint which i inexplicably broke between last night and this evening despite not changing any micropub endpoint code today
#GWG Do any of you know anything about Chris Aldrich, the one who wrote that popular post that is making the rounds?
#aaronpk at the last minute today, I upgraded a bunch of libraries i was using. I did check whether there were any updates bigger than a patch release (http://semver.org/) and there were none, so I thought everything would be fine.
#aaronpk but it turns out in a patch release, Laravel decided to subclass the Symfony UploadedFile class, which was how I was detecting whether there was a file in the micropub request.
#aaronpk so now I am checking whether the value is a subclass of UploadedFile
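(aaronpk's endpoint is PHP/Laravel; the class names below are stand-ins, but the failure mode is language-agnostic. A minimal Python sketch of the difference between an exact-class check, which breaks when a framework starts handing you a subclass, and a subclass-tolerant check like PHP's `instanceof`:)

```python
class UploadedFile:
    """Stand-in for Symfony's UploadedFile."""

class FrameworkUploadedFile(UploadedFile):
    """Stand-in for the subclass Laravel introduced in a patch release."""

value = FrameworkUploadedFile()

# Exact-class check: silently stops matching once a subclass arrives.
exact_match = type(value) is UploadedFile          # False

# Subclass-tolerant check (what `instanceof` does in PHP).
tolerant_match = isinstance(value, UploadedFile)   # True
```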
#aaronpk it just feels like an overwhelming task to write full tests for my whole site. I much prefer writing tests for specific libraries I'm using because they are easier to wrap my head around.
#aaronpk this all came up because I was trying to change the way I displayed event dates, which is handled by a library, so I pushed out a new version of that library
#aaronpk and in the process of installing it, I let it upgrade anything else that had updates available
#aaronpk well there were two fails. client side i was seeing memory warnings, and sometimes the "app" would just stop, but not crash with anything in particular
#KevinMarks hm, I suppose the other way to do a sanity check is to have another place you can micropub to, to check
#aaronpk at one point I did get a full-res version onto the server, but it didn't multipart-decode properly so the server rejected the file. pretty sure that was also a client-side fail, since i've uploaded large photos like that other ways
#bear I ended up having to use docker-compose to run a test server and then run my tests against it
#aaronpk interesting, looks like that's something that Laravel does
#aaronpk $request->url() does not include the slash when it's serving the home page
snarfed, codenamedmitri and shevski joined the channel
#bear the slash-no-slash bugs the heck out of me when i'm writing webmention code
#aaronpk there's no issue for regular URLs, only for bare domains
#aaronpk actually there isn't an issue for verifying the target param either
#aaronpk if the webmention says target=http://example.com then you look for that literal string on the page, and it's considered invalid if the page contains the string http://example.com/
#bear I may be causing my own grief then, my "url normalization" code strips trailing /
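(A sketch of the distinction being made here, using Python's stdlib: the only trailing-slash equivalence in URLs is the bare-domain case, where an empty path means "/". Stripping trailing slashes from non-empty paths, as bear's normalizer does, can change which resource the URL names.)

```python
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    """Normalize only the bare-domain case: an empty path is '/'.
    Do NOT strip trailing slashes elsewhere -- http://example.com/foo/
    and http://example.com/foo may be different resources."""
    parts = urlsplit(url)
    path = parts.path or "/"
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))

print(normalize("http://example.com"))       # http://example.com/
print(normalize("http://example.com/foo/"))  # http://example.com/foo/ (unchanged)
```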
#aaronpk gRegorLove: the only problem with preg_match is you can end up matching text that isn't actually in the HTML, like inside an HTML comment for example
#gRegorLove True. Weird edge case I'm willing to live with for now though.
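(The edge case aaronpk describes, sketched in Python rather than PHP: a plain substring/regex search "finds" a target URL that only exists inside an HTML comment, while an actual HTML parser never sees a tag there. The markup and URLs below are made up for illustration.)

```python
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Collects href values from real <a> tags. HTMLParser routes comment
    text to handle_comment, so a commented-out link never reaches
    handle_starttag."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.hrefs.extend(v for k, v in attrs if k == "href")

html = ('<!-- <a href="http://example.com/target">old</a> -->\n'
        '<a href="http://example.com/other">real</a>')

finder = LinkFinder()
finder.feed(html)

naive_hit = "http://example.com/target" in html         # True (false positive)
parsed_hit = "http://example.com/target" in finder.hrefs  # False
```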
bengo joined the channel
#KevinMarks base is only going to be an issue for same-site webmentions, isn't it?
shevski joined the channel
#voxpelli KevinMarks: yep, but I do want to support them
#KevinMarks so that may be an optimization. If you find the actual URL in the text, assume it's good. If you don't and they're same domain, do the more complex parse?
friedcell joined the channel
#voxpelli with my structure I have separated out the parsing of the page from the matching, so all data is already parsed and done when the matching occurs
#voxpelli thinking is that I should be able to move parsing to eg. Amazon Lambda and have that part be very easy to scale up
#aaronpk yeah the trick is i don't want to show a tombstone post on my home page when i delete something there, so it's almost like i need a separate URL that can show changes that the reader can consume
#bear I would think it would have to fetch the post permalink to get the GONE error (even if only when given an update webmention)
#voxpelli If I consume something from a feed I would like to get updates and deletes from there as well, so would like tombstones
#aaronpk is rel=updates or rel=changes/changelog a thing?
#aaronpk the other problem with deletes is you might be deleting something that is no longer on the first page
#aaronpk i was thinking a separate page that lists changes, so I could make an update to an old post, and then show that post in this separate page. and if i deleted something, would include a tombstone of it there
#voxpelli wonders if a fat ping could contain just an updated/deleted post without clients losing it
#aaronpk that sounds actually quite a bit more difficult for me to generate
#bear I may be confused... if my site deletes a post wouldn't I generate a webmention to references in that post just like on create?
#voxpelli depends on whether one interprets the PuSH "topic" as strictly the HTML in the topic URL or as the resource represented by the URL. I would say the latter
#aaronpk e.g. my topic url is https://aaronparecki.com/ and I want to update a post that appears on the third page, but I don't want to re-order the posts on my home page
#voxpelli aaronpk: the client should be listing content based on published date so it should work from the start
#myfreeweb why not send PuSH notifications with topic == full post url?
#voxpelli if you push just the updated post then the client should update any existing instance of that post or else add a new one with all data, including original post date, and sort correctly
#voxpelli "just the updated post" as in "The hub MAY reduce the payload to a diff between two consecutive versions if its format allows it."
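(voxpelli's reader behavior, sketched as a minimal Python upsert. The dict shape and id field are assumptions for illustration, not any particular reader's data model: replace an existing copy of the pushed entry, else add it, then sort by the original published date so an edited old post stays in place instead of jumping to the top.)

```python
def apply_ping(feed, entry):
    """Upsert a pushed entry into a reader's feed: replace any existing
    copy (matched on a stable id), else insert it, then re-sort by the
    original published date."""
    feed = [e for e in feed if e["id"] != entry["id"]]
    feed.append(entry)
    return sorted(feed, key=lambda e: e["published"], reverse=True)

feed = [
    {"id": "/2016/03/b", "published": "2016-03-16", "content": "new post"},
    {"id": "/2016/01/a", "published": "2016-01-02", "content": "old post"},
]
updated = {"id": "/2016/01/a", "published": "2016-01-02", "content": "edited"}
feed = apply_ping(feed, updated)
# The edited post keeps its original position; nothing re-orders.
```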
#myfreeweb PuSH hubs should have a way to subscribe to a wildcard like domain.tld/*
#myfreeweb reader subscribes to example.com/*, publisher sends ping with topic example.com/my/old/post, reader gets notification and refetches the post
#voxpelli yeah, and/or maybe a way to indicate in a ping what aspect / subpart it is that has been affected
#voxpelli the topic in itself just represents a resource I think – and such a resource can represent one's entire blog, so it probably already is pretty much a wildcard
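(For context, a PubSubHubbub 0.4 subscription is just a form-encoded POST to the hub; the sketch below builds those fields. The wildcard topic is myfreeweb's hypothetical — real hubs at the time expected a concrete topic URL, which is voxpelli's point.)

```python
def subscribe_params(topic, callback, lease_seconds=86400):
    """Form fields for a PubSubHubbub 0.4 subscription request,
    POSTed to the hub as application/x-www-form-urlencoded."""
    return {
        "hub.mode": "subscribe",
        "hub.topic": topic,
        "hub.callback": callback,
        "hub.lease_seconds": str(lease_seconds),
    }

# Hypothetical wildcard subscription (not supported by real hubs):
params = subscribe_params("http://example.com/*", "http://reader.example/cb")
```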
#myfreeweb why not keep the "just refetch" simplicity with push
#voxpelli myfreeweb: you need to know what to fetch if you want to fetch and you need to know what you received if you got a fat ping
#voxpelli and you need to know why you got it, hence the topic
#myfreeweb hmm so you won't get topic=example.com/my/old/post, you'll get topic=example.com/*?
#voxpelli one could argue that if the post is so old that it doesn't appear in one's feed, then it's not very likely to be updated or deleted, so in practice it's rarely a problem for a dedicated feed
#voxpelli with an mf2-feed it's a different issue, as one rarely has like 100 items in such a feed
#voxpelli singpolyma: is gnusocial using XML or JSON activitystreams btw? I think one major obstacle for tombstoning in older status.net was that PuSH didn't support them
#singpolyma could also as in #social -- MMM-o probably knows
#Loqi singpolyma meant to say: could also ask in #social -- MMM-o probably knows
#voxpelli as PuSH parsed the feeds and extracted the new/changed entries, and tombstones aren't entries
#voxpelli for h-feeds it would be nice with an h-entry with a dt-deleted property or similar + a way to broadcast such a change if the post has dropped off one's front page
#voxpelli needs to do an indie-reader so he can dogfood it
#voxpelli I'm thinking that one should do PuSH pings whenever one's resource is updated – no matter if something is added to it, changed or deleted from it
#singpolyma MMN-o has confirmed in #social, GNUsocial handles deletes by publishing an activity:verb of deletion
#aaronpk voxpelli: i just don't see how you would know that a post on the third page of my home page feed has been deleted
#voxpelli aaronpk: doesn't every post have a unique id that clients deduplicate on?
tantek joined the channel
#voxpelli but perhaps not that you referred to – the tricky thing is mentioning what part of one's resource has changed if it's gone past the first page
#voxpelli but just adding some rel that points that out would be enough, no?
#aaronpk mediawiki solved this by having a "recent changes" page
begriffs joined the channel
#ben_thatmustbeme hmm likes from woodwind clearly don't work on my site at the moment
#voxpelli I'm thinking that this applies to any rel-next/rel-prev page – one should be able to subscribe to all pages as a whole and get notified what single page/part it is that has been affected
#voxpelli so that a ping would have rel=self to indicate its context, the topic that the update comes from, and rel-canonical to indicate the preferred URL to fetch the pushed resource from
bengo and squeakytoy2 joined the channel
#aaronpk at that point it sounds like you're adding an "update" command to PuSH
#voxpelli so if one pings page 3, then one would do rel-self=/resource and rel-canonical=/resource?page=3
#aaronpk canonical doesn't sound like the right term to use there
#voxpelli well, canonical is specced like "Designates the preferred version of a resource (the IRI and its contents)." and self is specced like "Conveys an identifier for the link's context."
#voxpelli aaronpk: rel-canonical only states which URL is the preferred version of the URL of your third page; the pagination you would indicate in your HTML with rel-start/rel-next/rel-prev, no?
#voxpelli tricky thing is to find a way to provide pagination data in a shallow ping; in a fat ping one could provide that data within the push payload
#aaronpk it seems there are essentially two different ways of handling this, so i'd be curious to write them both down and get feedback on them from others who are actually building readers and publishers
#aaronpk since you don't fetch the post URLs, in order to delete a post in woodwind, there would need to be a placeholder in the feed that indicates it's deleted, right?
tvn and friedcell joined the channel
#kylewm yep, unless we come up with another mechanism
wolcen_ joined the channel
#voxpelli aaronpk: kylewm: would dt-deleted make sense? to accompany dt-published and dt-updated?
#aaronpk i retract my previous preference for u-deleted pointing to the post's URL. dt-deleted + url makes sense, however there still may be a desire/need to indicate a post is deleted without specifying the datetime of when it was deleted.
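(A sketch of what the tombstone being brainstormed here might look like in an h-feed. `dt-deleted` is the proposed property under discussion, not an established microformats2 property at this point, and the URL and date are made up for illustration:)

```html
<article class="h-entry">
  <a class="u-url" href="http://example.com/2016/03/some-post">Deleted post</a>
  <time class="dt-deleted" datetime="2016-03-16T12:00:00Z">March 16, 2016</time>
</article>
```

A reader consuming the feed would treat any h-entry carrying dt-deleted as a removal instruction rather than content to display.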
#kylewm that's a good point tantek. it exposes a different problem.
#kylewm aaronpk: i guess the alternative would be having the reader subscribe for updates to each individual post, like voxpelli was suggesting earlier (I think, got confused on the rel-canonical stuff)
#kylewm so the reader would only get updates about a deleted post if it had already seen it, i.e. the damage was already done
#voxpelli if it requires extra subscriptions then very few will likely bother to support it, but if they get the notifications anyway, then making something usable of them is a very minor addition, so I think dt-deleted/u-deleted makes sense for feeds
#voxpelli (the rel-canonical was just a weird brainstorm)
KartikPrabhu joined the channel
#aaronpk yeah subscribing to each individual post seems unlikely to happen, although another benefit of that would be you could show comment threads and update like/repost counts easily
begriffs joined the channel
#KevinMarks this rel=updates thing sounds like an activity stream
#myfreeweb if push hubs had wildcard support, readers could easily subscribe to all individual posts at once
#KevinMarks hm, for a site that supports arbitrary posting, could you make an auth-less micropub endpoint?
#myfreeweb arbitrary posting? like accepting random spam?
#KevinMarks eg svgur.com lets you post arbitrary image + name + summary
#KevinMarks admittedly I do it with google's file upload thing, so it has fugly submit urls
#myfreeweb well, yeah, of course you can have authless micropub
#myfreeweb the spec even says SHOULD, not MUST for OAuth/IndieAuth
#myfreeweb but current clients are designed for IndieAuth
#aaronpk i have some clients that don't support IndieAuth, and I just copy/paste a token into them
tvn, Mutter, Gold, mcclearen, shiflett, mlncn and Pierre-O joined the channel
#kylewm hmm, not including required information in slugs doesn't help the delete question i had earlier. if i post kylewm.com/2016/03/16/b1/sensitive-info and then later issue a delete for kylewm.com/2016/03/16/b1, the reader won't know those are the same unless I redirect, and then I have the same problem as before
#KevinMarks the slug problem is common to wordpress and known for passworded posts too
#bear this is where the architect in me starts chanting/muttering: guid guid guid!
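(bear's point, sketched in Python: if every entry carries a stable uid that readers deduplicate on, a delete still matches even when the post's slug/URL has changed since the reader first saw it, which is exactly the case kylewm describes. The uids and URLs below are invented for illustration.)

```python
def delete_by_uid(feed, uid):
    """With a stable uid on every entry, a delete matches even if the
    post's slug/URL changed after the reader first saw it."""
    return [e for e in feed if e["uid"] != uid]

feed = [
    # The reader saw this post under its original, sensitive slug...
    {"uid": "b1", "url": "http://kylewm.com/2016/03/16/b1/sensitive-info"},
    {"uid": "c2", "url": "http://kylewm.com/2016/03/17/c2"},
]
# ...and the delete arrives under a shortened URL, but the uid still matches.
feed = delete_by_uid(feed, "b1")
```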