#dev 2019-04-09

2019-04-09 UTC
dougbeal|mb1 joined the channel
#
GWG
Dev topic. Pingbacks...support or not?
#
GWG
Thoughts?
#
jacky
so I'm just noticing some sites (forcibly) send them to my static site
#
jacky
and I do want to collect that info
#
jacky
but now the case falls apart b/c I'd have to do a bit of extra parsing to make it usable on my site
#
jacky
and I'd have to 'translate' them into webmentions (or make some compatibility layer)
#
GWG
jacky: That is easy enough.
#
jacky
now the question is: is it _truly_ worth it?
#
jacky
it'd be purely to get them from mentions in larger pubs that use WordPress or support pingbacks
#
GWG
I use WordPress and I'm not rushing to do more with them.
[jgmac1106], KartikPrabhu, Ruxton and [eddie] joined the channel
#
[eddie]
TIL HTTP 307 might be my serverless media endpoint saving grace! https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307
#
[eddie]
Guaranteed not to change the http method type on redirect
#
[eddie]
Pretty excited about this! Inching closer to a final solution!
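A minimal sketch of the idea [eddie] is describing, assuming a Flask-based media endpoint (the bucket URL is hypothetical): because a 307 preserves the method, the client replays the same POST, body and all, at the new location.

```python
# Sketch only: a media endpoint that immediately 307-redirects uploads.
from flask import Flask, redirect

app = Flask(__name__)

@app.route("/media", methods=["POST"])
def media():
    # Unlike 301/302, a 307 forbids the client from switching to GET,
    # so the multipart upload is re-sent unchanged to the target URL.
    return redirect("https://example-bucket.s3.amazonaws.com/upload", code=307)
```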
snarfed joined the channel
#
[eddie]
Tomorrow I literally just have to generate my S3 bucket signed URLs, put my serverless media endpoint url in my Micropub config and then run a test using both Indigenous for iOS and Quill :crossed_fingers:
[dougbeal] joined the channel
#
[dougbeal]
:crossed_fingers:
snarfed, [fluffy] and ingoogni joined the channel
#
Ruxton
aaronpk: not sure if this interests you or not, but I wrote some code in OwnYourGram to check instagram_img and instagram_img_list for URL sig failures and update them if they are failing
cweiske joined the channel
#
Zegnat
Hmm, I do wonder how 307 works for mobile devices. Will you burn double the bandwidth or can the server somehow cut off the first upload?
#
cweiske
double, because you could also be redirected to a new server
#
Zegnat
Yeah, that was my guess too. Just wasn’t sure if there was some way for the server to bail out early and stop the client’s upload
#
cweiske
hm. there is "100 continue"
#
Loqi
100 Continue is a status-code you might not deal with very often. Generally, as a web developer, the 100 Continue status is sent under the hood by your webserver. So what’s it for? The best example comes from RFC 7231. Say, you’re sending a larg...
#
cweiske
"The big benefit here is that if there’s a problem with the request, a server can immediately respond with an error before the client starts sending the request body."
#
Zegnat
The problem in this use-case is that it seems to say you either give a 100 or a 4xx, while the media endpoint wants to provide a 3xx
#
Zegnat
Otherwise it would be perfect for sending different file types and sizes to different endpoints (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Expect#Large_message_body)
#
cweiske
no, the server can return what it wants
#
Zegnat
Hmm, yeah, looking at the RFC for expect it does not seem to rule out returning redirects
#
Zegnat
No clue what clients will do with that though
#
cweiske
it's client-initiated anyway
#
cweiske
the client needs to send "Expect: 100-continue" itself
#
Zegnat
Yes, that is what I am saying. And all examples I am seeing will either return 100 (headers are OK, continue upload) or a 4xx (something is wrong)
#
Zegnat
Hmm, also seeing a few examples of responding with 3xx now. So maybe it’ll work.
gRegorLove, ingoogni1, petermolnar, blueyed, swentel, swentie, iasai and [grantcodes] joined the channel
#
[grantcodes]
Ha [eddie] I am impressed by your dedication to your serverless setup, seems you're at the point of finding workarounds for workarounds 😛
gRegorLove_ joined the channel
#
GWG
Morning
#
Loqi
guten morgen
[grantcodes], [kevinmarks], [Rose] and ingoogni joined the channel
#
[grantcodes]
Aha so [Rose] it looks like you can't log into Together because of an activity streams / fediverse plugin? But I should be able to fix that
#
[grantcodes]
I assume it is because I have not set up the content type on my http request properly
#
Zegnat
I can make an HTTP request for rosemaryorchard.com without an Accept header and I still get HTML back
#
Zegnat
Where are you seeing that issue, [grantcodes]?
#
[grantcodes]
[Zegnat] I think it is probably the http lib I am using, I think it tries to set some smart defaults as it is mainly for json
#
Zegnat
Oooh, so the lib may set the accept to JSON?
#
Zegnat
Yep, [Rose]’s website will return JSON for a JSON Accept header
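Easy to reproduce with any HTTP client; a quick check of the content negotiation Zegnat describes (requests library assumed):

```python
# Same URL, different Accept header, different representation back.
import requests

url = "https://rosemaryorchard.com/"
default = requests.get(url)
as_json = requests.get(url, headers={"Accept": "application/json"})
print(default.headers.get("Content-Type"))  # text/html per Zegnat's test
print(as_json.headers.get("Content-Type"))  # reportedly JSON on this site
```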
#
Zegnat
[Rose]: would you have the ability to add the microsub endpoint to a link HTTP header? I see micropub and webmention already there.
#
Zegnat
That may also fix it (no HTML necessary for parsing and finding the endpoint)
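A sketch of why the Link header route helps clients, assuming the requests library (its `links` property parses the Link header for you):

```python
# Endpoint discovery from HTTP headers alone, no HTML parsing needed.
import requests

resp = requests.head("https://rosemaryorchard.com/")
for rel in ("webmention", "micropub", "microsub", "authorization_endpoint"):
    url = resp.links.get(rel, {}).get("url")
    print(rel, "->", url or "(absent; client must fall back to the HTML <link>)")
```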
#
[grantcodes]
That is another way to fix it, but I still should fix my request somehow
#
Zegnat
I agree, [grantcodes]. Just at the off-chance [Rose] is around and looking for the quick fix
#
Zegnat
(As in: faster than waiting for restructure in Together and a new launch to the public hosted version.)
[jgmac1106] joined the channel
#
[Rose]
I can add the Microsub endpoint to my <head>, but I already did
#
[grantcodes]
Eh, was only 3 lines. It is updated on the hosted Together
#
[grantcodes]
You should be able to log in now [Rose]
#
Loqi
I agree
#
[grantcodes]
Loqi is confident
#
Zegnat
[Rose]: I meant to the HTTP Headers, not the HTML <head>.
eli_oat and [eddie] joined the channel
#
[eddie]
[grantcodes] 😆 Workarounds for workarounds sounds pretty accurate. It's gonna make for a fun/interesting blog post when I'm done
#
[eddie]
The good news is that the Media Endpoint seems like it'll be the hardest thing. Everything else (Micropub, Webmention processing, etc) all seems more suited for what is happening here and should be much easier
#
[Rose]
Oh, no idea how to do that on WordPress, I can try though.
#
[grantcodes]
[Rose] you shouldn't need to now. But the issue I had was not the microsub endpoint that Zegnat mentioned but that the authorization endpoint was missing from the headers. Microsub may be missing too, but I didn't get that far 😛
#
Zegnat
Yes, authorization endpoint is also missing. Sorry for being unclear.
[Zegnat] joined the channel
#
[Zegnat]
[Rose] I would have expected to see microsub / indieauth-related headers there. Maybe worth filing an issue on the WP plugins.
#
[Zegnat]
In `curl -I https://rosemaryorchard.com/`, that is. (Sorry IRC users, I posted the output on Slack.)
#
jacky
I feel like this is a given (nope) but is there a way to "reassign" the user for services in Aperture?
#
[Rose]
I'm not using a microsub plugin
#
jacky
if not, I might just use a domain redirect and start working on my microsub server
#
[Rose]
The aperture plugin summary implies it only works for Aperture hosted with Aaron
#
Zegnat
Not sure I understand the questions, jacky
#
[Rose]
[grantcodes] Your wizardry worked!
#
Zegnat
[Rose]: aah, it would be up to the Aperture plugin to also add the Link header then
#
[Rose]
And voilà, logged into Together, and marked all as read!
#
jacky
Currently, I use Aperture via https://v2.jacky.wtf as the site I log in with
#
jacky
While maintaining my data
#
jacky
I think it's okay for me to declare a hail mary, felt like my feeds were a bit cluttered anyway
[kimberlyhirsh] joined the channel
#
Zegnat
Aah, no, I don’t think Aperture supports user moves. You could file a request with aaronpk though
#
[eddie]
Yeah for one off conversions, aaronpk has done it manually for some people. So asking him can totally work
#
jacky
that's fine :)
#
Zegnat
The real solution, jacky, is of course to run your own microsub server ;)
#
jacky
lol no lie that's exactly what I plan on doing
[kenbauer] and [schmarty] joined the channel
#
@ilikebeans
↩️ Maybe eventually this could also support Webmention, etc.
(twitter.com/_/status/1115633072567930880)
snarfed and rMdes joined the channel
#
aaronpk
Yeah too many edge case rabbit holes to go down to enable self service domain migration in aperture but happy to do it manually if you want
[kenbauer], ingoogni and [schmarty] joined the channel
#
@m_andrasch
Keine Zeit, damit rumzuspielen gerade, aber wäre das nicht was für OER-Nachnutzungs-Notifications? No time to play with it, but isn't that a system for notifications of reuse of #OER? #Webmentions #OpenWeb #OpenEducation https://indieweb.org/Webmention
(twitter.com/_/status/1115663143584374785)
#
snarfed
bridgy's superfeedr notifications have the oddest pattern. repeats every three days, so it's not a weekly thing. maybe some three day long cycle of batch jobs inside superfeedr, or in wp.com or blogger or tumblr? so weird. https://snarfed.org/bridgy_superfeedr.png
#
aaronpk
huh. i should log the ones i'm receiving too
#
snarfed
oh that's superfeedr's own dashboard
#
aaronpk
oh, cool
#
snarfed
has both good analytics and good individual subscription debugging tools
gRegorLove_ and [kim_landwehr] joined the channel
#
[kim_landwehr]
When connecting Quill.p3k.io with Blot.im using Heroku, I'm trying to figure out the line <link rel="micropub" href="https://deployed-blotpub-app.com/micropub">. Is that all I need to put in or do I change it in any way? Feeling confused and probably overthinking
#
Zegnat
I do not know what Blot.im is, or how its micropub works. But generally Micropub clients need a website to have the rel="micropub" defined, like the tag you have there, as well as IndieAuth endpoints defined for login to work
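Concretely, a site's <head> usually needs something like the following for a Micropub client login to work; the IndieAuth endpoint URLs below are just common hosted choices, not ones Blot.im prescribes:

```html
<link rel="micropub" href="https://deployed-blotpub-app.com/micropub">
<link rel="authorization_endpoint" href="https://indieauth.com/auth">
<link rel="token_endpoint" href="https://tokens.indieauth.com/token">
```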
[eddie] joined the channel
#
[eddie]
Blast! I think my Serverless Media Endpoint plans have officially collapsed! S3 supports uploading a file directly via a PUT request, but a POST request requires some specific fields in the multipart form data, which obviously no Micropub client is going to send
#
[eddie]
lol 🤷‍♂️ so no uploading directly to Amazon s3
[Rose] joined the channel
#
[Rose]
No chance of a script that "translates" and passes the request on to S3?
#
jgmac1106[m]
Snarfed, what is the IP address of Bridgy I should send to my shared host to try and get BitNinja to unblock it?
#
aaronpk
You could make a lambda script that does that
#
aaronpk
that's still serverless
#
[eddie]
haha you must have missed my past posts on this channel
#
snarfed
jgmac1106[m]: is this for your timeouts? do we actually know that bitninja is blocking bridgy? as opposed to something just being slow?
#
aaronpk
Apparently :-)
#
[eddie]
Lambda (and almost all serverless scripts) limit the upload payload to 5MB max
#
[eddie]
so not good for a media endpoint
#
[eddie]
hence, the s3 journey
#
[eddie]
and my media endpoint already has items bigger than 5MB so that's definitely not doable (I also would like to start posting video and audio to my media endpoint, which will just grow that lol)
#
aaronpk
Hm how do other services handle this then? Plenty of sites have user generated content >5mb stored on S3
#
[eddie]
Yeah so it works as long as you control the upload mechanism
#
aaronpk
IIRC the twitter api supports a kind of chunked upload
#
[eddie]
typically the workflow is you use the amazon sdk to generate a "signed url" for s3 that provides access
#
[eddie]
then if you control the client, you can send the file via a PUT to that signed url
#
[eddie]
but of course, that doesn't match the Micropub Media Endpoint method of POST
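A sketch of that workflow with boto3 (bucket and key names hypothetical):

```python
# Server side: mint a time-limited presigned URL for a direct PUT to S3.
import boto3

s3 = boto3.client("s3")
signed_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-media-bucket", "Key": "uploads/photo.jpg"},
    ExpiresIn=3600,  # valid for one hour
)
# A client you control can then do: requests.put(signed_url, data=file_bytes)
# A generic Micropub client will instead POST multipart to the media endpoint.
```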
#
aaronpk
thats interesting
#
[eddie]
the other option for people that want to do POST uploads is they include details about the s3 bucket inside the form they are using to upload the file
#
[eddie]
again, as long as you are building the app you can send s3 specific details inside that form
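For reference, the POST variant [eddie] means looks roughly like this in boto3; the extra fields it returns must be echoed back in the multipart form, which generic Micropub clients won't do:

```python
import boto3

s3 = boto3.client("s3")
post = s3.generate_presigned_post("my-media-bucket", "uploads/photo.jpg")
print(post["url"])     # where the form must be POSTed
print(post["fields"])  # e.g. key, policy, x-amz-signature: all required parts
```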
#
aaronpk
I know a bunch of other services are copying the S3 API to support other tools out of the box
#
[eddie]
Yeah i've seen some stuff like that, like DreamHost has DreamObjects that is an S3 compatible object storage
#
jgmac1106[m]
I got a warning. Trying to find out if it is slowing me down, just trying to turn over any rock
#
aaronpk
Does that approach offer any benefits that other micropub servers or clients could take advantage of?
#
[eddie]
the S3 API is pretty good, if they would just allow a POST request sent with just a file to a single url (exactly like their PUT works) everything would be perfect
#
[eddie]
haha SO close
#
[eddie]
Yeah, it really is
#
jgmac1106[m]
Though it may not be necessary; it seems Known's internal plumbing is picking up the majority of comments and likes... Need to figure out how much the two duplicate each other... Known POSSE and Bridgy
leg joined the channel
#
[eddie]
hmmm thinking about [Rose]'s mention of "translating", I wonder if I could set up a "proxy" somewhere somehow, that literally doesn't use any code but just accepts POST requests for a URL and proxies those to S3 as PUT requests. I'm not even sure if you can change the HTTP method when proxying
#
[eddie]
or even much less where I would put the proxy, because if it's on my server and it goes down again then I'm still out of luck, thus defeating the purpose of serverless
#
[Rose]
Can't it live on an Amazon something?
#
[eddie]
essentially anything Amazon-based that isn't S3 rejects POST requests that are larger than 5MB
#
[Rose]
(The fact that I'm referring to an "Amazon something" is a clue I'm guessing and have no idea what service would be suitable though!)
#
[eddie]
so I could potentially spin up a super small EC2 server, maybe small enough to be free?
#
[Rose]
That's the one I was thinking of! It's worth a try? And worst case scenario: make an Alexa skill and get free credit?
#
[eddie]
haha true true!
#
[Rose]
"Alexa, when is the next indieweb event?"
#
[eddie]
I'll be really happy when I get past the Media Endpoint and can move on to the stuff that I know will be easy using Serverless stuff: Micropub, Webmention, Processing and Storage
#
[eddie]
The other option is I should just open up my s3 bucket to the entire world, and delete things within 24 hours lol
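The 24-hour-cleanup half of that joke is at least cheap to implement; a sketch with an S3 lifecycle rule (boto3 assumed, bucket name hypothetical):

```python
# Expire every object in the bucket roughly a day after upload.
import boto3

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="my-media-bucket",
    LifecycleConfiguration={"Rules": [{
        "ID": "expire-uploads",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},   # match all objects
        "Expiration": {"Days": 1},
    }]},
)
```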
#
Zegnat
A close-to-the-protocol proxy shouldn’t have any issue with rewriting only the method, I guess? You would literally be rewriting only the first line of the request, and otherwise just byte-for-byte proxy it straight to the second server
#
Zegnat
Wouldn’t even need to write to disk at any time?
#
[eddie]
Yeah I just saw it seems like nginx's proxy can change the method
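In nginx that would be the `proxy_method` directive; as a sketch in code instead (Flask + requests assumed, presigned URL hypothetical), the "translation" is just replaying the uploaded part as a PUT:

```python
# Toy POST-to-PUT translating proxy for a media endpoint.
from flask import Flask, Response, request
import requests

app = Flask(__name__)
SIGNED_PUT_URL = "https://example-bucket.s3.amazonaws.com/uploads/photo.jpg"

@app.route("/media", methods=["POST"])
def translate():
    f = request.files["file"]  # Micropub sends the upload in a part named "file"
    upstream = requests.put(SIGNED_PUT_URL, data=f.stream)  # streamed, not buffered
    return Response(status=upstream.status_code)
```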
#
snarfed
[eddie]: out of curiosity, have you looked at other serverless platforms? seems like you're working really hard to squeeze through some pretty specific limits that are probably AWS-specific
#
[eddie]
I have, all serverless functions have the same limits
#
[eddie]
Well at least Azure, Google and AWS
#
snarfed
5MB request size?
#
[eddie]
It's a conspiracy! haha
#
[eddie]
there's a little variability but all under 10mb
#
[eddie]
hmm ibm does have a cloud...
#
snarfed
there are a number of other smaller ones too
#
Zegnat
What about Cloudflare’s thing? Didn’t they launch a thing that could intercept requests to URLs you put them in front of?
#
sknebel
costs $5 month though
#
Zegnat
I was thinking CF should be able to handle arbitrary sized requests, their business being proxying and all
#
sknebel
I'm half expecting snarfed to sing the praises of good old App Engine, but I suspect that has some limit too?
#
snarfed
sknebel++ lol
#
Loqi
sknebel has 43 karma in this channel over the last year (110 in all channels)
#
snarfed
...but hmm now that you mention it...
#
snarfed
yeah Google Cloud Functions' limit is 10MB. https://cloud.google.com/functions/quotas#resource_limits
#
@jgmac1106
↩️ if only Mastodon supported webmentions. That would make the coolest interop workflow for comments. In fact I turn off native comments and just use webmentions. Wish I could for Mastodon (https://quickthoughts.jgregorymcverry.com/s/2bE17f)
(twitter.com/_/status/1115685322053115904)
#
[eddie]
oh wow!
#
[eddie]
yeah 32mb IS much better
#
aaronpk
Use app engine to upload to S3? Lol
#
[eddie]
haha yep
#
aaronpk
32mb ought to be enough for anyone ;-)
#
aaronpk
I do wonder about a more resilient solution for really big uploads like long videos tho
#
[eddie]
Yeah, i think 32mb would work, as you said for casual use
#
aaronpk
A single POST request is always going to be limiting
#
[eddie]
but any limit does slightly concern me
#
sknebel
I think there was an open spec for incremental uploads we looked at a while back?
#
aaronpk
I should go look at the twitter api and other apis that handle large chunked uploads
#
aaronpk
maybe there's an opportunity for an extension to media endpoints to support something like that
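Hand-waving at what such an extension could look like, loosely modeled on Twitter's INIT/APPEND/FINALIZE chunked-upload flow; every URL, parameter, and command name here is hypothetical:

```python
import os
import requests

MEDIA = "https://media.example.com/upload"
CHUNK = 1 << 20  # 1 MiB: a dropped chunk is retried alone, not the whole file

def chunked_upload(path: str, token: str) -> dict:
    auth = {"Authorization": f"Bearer {token}"}
    total = os.path.getsize(path)
    media_id = requests.post(MEDIA, headers=auth, data={
        "command": "INIT", "total_bytes": total}).json()["media_id"]
    with open(path, "rb") as fh:
        for i, chunk in enumerate(iter(lambda: fh.read(CHUNK), b"")):
            requests.post(MEDIA, headers=auth,
                          data={"command": "APPEND", "media_id": media_id,
                                "segment_index": i},
                          files={"media": chunk})
    return requests.post(MEDIA, headers=auth, data={
        "command": "FINALIZE", "media_id": media_id}).json()
```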
#
[eddie]
Yeah that would be pretty cool
#
[eddie]
definitely be helpful as the spec grows where you might want to upload large videos or audio files
#
aaronpk
Hm how does micro.blog handle mp3 uploads for podcasts? IIRC the limit is pretty large
snarfed joined the channel
#
[eddie]
Good question. Most people use either Wavelength or the web interface to upload
[grantcodes] joined the channel
#
[eddie]
Wavelength DOES use Micropub but is locked to Micro.blog for uploads right now
#
[grantcodes]
So a non-"serverless" option is out of the question? Feels like it would be pretty easy to write something non-serverless to upload to S3
#
[eddie]
it does take a little while to upload some audio files to Micro.blog but it works
#
[eddie]
[grantcodes] I could write something non-serverless to upload to S3. It's just that after my DreamHost on-demand instance was unavailable for a day, I'm trying to migrate what I can to serverless
#
[eddie]
ohhh that's kind of cool aaronpk
#
[eddie]
the other benefit of that is if you are having to do rejection or redirection you aren't uploading an entire file first
#
[eddie]
which was another fault that Zegnat brought up with redirection
#
aaronpk
A little more work on both the client and the server but this does provide benefits
#
[eddie]
Yeah. I think two big things are: good support for large files and if you are in bad network conditions you don't have to retry the ENTIRE file which is nice
#
[eddie]
more reliable on mobile clients, etc.
#
[eddie]
plus: serverless support one day 😆
#
Zegnat
upvotes anything that saves bytes
#
aaronpk
is on 3G right now and would appreciate that as well
#
@jgmac1106
↩️ anyone can give anyone or anything Karma in the #IndieWeb chat other than that having a url is your reputation. You could use number of webmentions as indicators... But reputation engines get icky and gamed quickly. (https://quickthoughts.jgregorymcverry.com/s/6EgKB)
(twitter.com/_/status/1115691047575797760)
gRegorLove_, snarfed, [davidmead], KartikPrabhu, ingoogni and [kevinmarks] joined the channel
[aaronpk], [eddie], [Rose] and Johan1 joined the channel
#
KartikPrabhu
now that G+ is dead what are people doing with the syndication links?
#
KartikPrabhu
I want to keep them so that my original-of page still works with G+ URLs but I want to indicate on my site and possibly in mf2 that those no longer exist and are not to be trusted
jbove and [schmarty] joined the channel
#
[schmarty]
i have been thinking of this as i want to import my G+ data into my site and indicate that it *was* part of G+
#
[schmarty]
part of me wants to put it in the "See Also" where i put my syndication links, but strikethrough or gray out the link
#
KartikPrabhu
[schmarty]: yes, for the visual display I also thought of strikethrough
#
KartikPrabhu
but automated consumers like mf2 parsers would still pick it up
#
[schmarty]
u-previous-syndication 😂
#
KartikPrabhu
u-dead-syndication
[jgmac1106] joined the channel
#
[jgmac1106]
link to the ID on the wiki
#
[jgmac1106]
for the site death page
#
[schmarty]
jgmac1106: that would not be accurate if the link was marked as u-syndication, since the post content doesn't appear there.
#
[schmarty]
but that's a somewhat reasonable UI thing for human visitors.
snarfed joined the channel
#
[aaronpk]
Wow interesting problem
#
[aaronpk]
I guess I'd probably want to hide the links from public display now
#
[aaronpk]
I should do that with my app.net links too actually...
#
[aaronpk]
Especially since something else is on that domain now so who knows what I'm linking to
#
KartikPrabhu
but how do you mark it up or do you just remove it from the post
#
aaronpk
is there any value to having other people know the old link?
#
aaronpk
my first inclination is to remove it from public display entirely. obvs i'd want to keep storing it internally, but i dont need to link to it publicly anymore.
#
KartikPrabhu
good point. I was thinking the opposite. If people have the G+ link they can still find my post from the original-of page
#
aaronpk
right, but that page doesn't need the links to be visible
#
KartikPrabhu
I guess one stores some list of "dead sites" and compares to that while displaying
#
KartikPrabhu
in my case there is only one I suppose
#
aaronpk
hm, yeah that sounds like the easiest for me
#
aaronpk
i was first thinking i'd need to store them separately from the current syndication URLs but it might be easier to filter them on the fly essentially
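A sketch of that filter-on-display approach: keep every syndication URL in storage, but drop known-dead silos at render time (the domain list is illustrative):

```python
from urllib.parse import urlparse

DEAD_SILOS = {"plus.google.com", "alpha.app.net"}  # shuttered silos

def visible_syndication(urls):
    """Return only the syndication links worth showing publicly."""
    return [u for u in urls if urlparse(u).hostname not in DEAD_SILOS]

# visible_syndication(["https://plus.google.com/+me/posts/1",
#                      "https://twitter.com/me/status/2"])
# -> ["https://twitter.com/me/status/2"]
```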
#
aaronpk
can anyone else think of a consuming use case that would require us to continue publishing the old URLs?
#
aaronpk
i'm trying to think of when I currently consume other ppls syndication URLs, and I think it's primarily for deduping webmentions
#
KartikPrabhu
would there be some brid.gy use-case?
#
KartikPrabhu
can't think of any
#
aaronpk
if there was, would it matter anymore once the URLs are dead?
#
KartikPrabhu
no and it didn't matter before for bridgy publish
#
snarfed
they're nice for data mining, eg indie map. eg if i ever did an indie map recrawl, i'd obviously prefer people to display: none them instead of removing them entirely, to preserve the historical data
#
KartikPrabhu
snarfed: how would dead links help in indie map?
#
snarfed
KartikPrabhu: understanding how many people POSSEd to it when it was alive, via which/how many users or pages, how often, etc
#
KartikPrabhu
aah I see
#
aaronpk
interesting
#
snarfed
you're a scientist, right? for science! :P
#
aaronpk
i'm not sure display:none is good enough tho, because readers and other consumers wouldn't know to hide it
#
aaronpk
but that is an interesting case
#
KartikPrabhu
also that ^
#
snarfed
do we know of any readers or other consumers that do anything explicit with synd links?
#
aaronpk
they appear in monocle
#
snarfed
empty link text would also work, in readers as well as browsers
#
snarfed
at least for implicit display in readers. maybe not explicit.
#
aaronpk
monocle only uses the parsed mf2 result so doesn't know about the link text
#
aaronpk
it re-links it with the icon corresponding to the domain of the syndication link, or a generic icon if it doesn't know it
#
snarfed
"vive la difference" also applies here. we can make recommendations, but individual people will all do different things. eg i plan to leave mine up. clickers beware.
#
sknebel
only use I can think of is if someone wants to dive into archive.org etc for responses and things like that that weren't backfed
#
aaronpk
hmm another good one!
#
KartikPrabhu
archive.org does not parse G+ afaik
#
KartikPrabhu
but for general POSSE that does seem like a good use-case
#
KartikPrabhu
hmm maybe it didn't crawl my posts
#
KartikPrabhu
i should have told it to or something
#
snarfed
eh i never told it to
#
Loqi
agreed.
#
KartikPrabhu
hmm it has my profile page but not individual post pages then?
#
KartikPrabhu
hmm no it has posts too except the one I am looking at :P
#
gRegorLove
Monocle could replace the G+ icon with a tombstone
snarfed1 joined the channel
#
GWG
Fun discussion
[shaners] joined the channel
#
gRegorLove
I think I'll do the tombstone icon if I have any dead syndication links. I don't think I have any G+ ones though.
[jgmac1106] joined the channel
#
[jgmac1106]
can it be from Oregon Trail?
#
GWG
[jgmac1106]: I miss that game
#
Loqi
misses that game too