#[eddie]Guaranteed not to change the HTTP method on redirect
#[eddie]Pretty excited about this! Inching closer to a final solution!
snarfed joined the channel
#[eddie]Tomorrow I literally just have to generate my S3 bucket signed URLs, put my serverless media endpoint url in my Micropub config and then run a test using both Indigenous for iOS and Quill :crossed_fingers:
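A rough sketch of the plan being described here, assuming a Node/TypeScript Lambda and the AWS SDK v3; the handler shape, bucket name, and key scheme are illustrative, not [eddie]'s actual code:

```typescript
// Hypothetical serverless media-endpoint handler: mint a short-lived
// pre-signed S3 URL and answer the Micropub client's POST with a 307,
// which clients must follow without changing the method or dropping the body.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

export async function handler() {
  const signedUrl = await getSignedUrl(
    s3,
    new PutObjectCommand({
      Bucket: "example-media-bucket", // placeholder bucket
      Key: `uploads/${Date.now()}`,
    }),
    { expiresIn: 300 } // valid for five minutes
  );

  // Caveat: this URL is signed for PUT while the redirected request stays
  // a POST -- the exact snag that surfaces later in this conversation.
  return {
    statusCode: 307,
    headers: { Location: signedUrl },
    body: "",
  };
}
```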
#Ruxtonaaronpk: not sure if this interests you or not, but I wrote some code in OwnYourGram to check instagram_img and instagram_img_list for URL sig failures and update them if they are failing
cweiske joined the channel
#ZegnatHmm, I do wonder how 307 works for mobile devices. Will you burn double the bandwidth or can the server somehow cut off the first upload?
#cweiskedouble, because you could also be redirected to a new server
#ZegnatYeah, that was my guess too. Just wasn’t sure if there was some way for the server to bail out early and stop the client’s upload
#Loqi100 Continue is a status-code you might not deal with very often.
Generally, as a web developer, the 100 Continue status is sent under
the hood by your webserver.
So what’s it for? The best example comes from RFC 7231. Say, you’re
sending a larg...
#cweiske"The big benefit here is that if there’s a problem with the request, a server can immediately respond with an error before the client starts sending the request body."
#ZegnatThe problem in this use-case is that it seems to say you either give a 100 or a 4xx, while the media endpoint wants to provide a 3xx
#cweiskethe client needs to send "Expect: 100-continue" itself
#ZegnatYes, that is what I am saying. And all examples I am seeing will either return 100 (headers are OK, continue upload) or a 4xx (something is wrong)
#ZegnatHmm, also seeing a few examples of responding with 3xx now. So maybe it’ll work.
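For illustration, a minimal Node/TypeScript sketch of a client using `Expect: 100-continue`; the endpoint URL and file path are made up, and whether a real media endpoint answers this way with a 3xx is exactly what is being questioned above:

```typescript
import * as http from "node:http";
import * as fs from "node:fs";

// Hypothetical media endpoint and file, purely for illustration.
const req = http.request("http://media.example.com/micropub/media", {
  method: "POST",
  headers: {
    Expect: "100-continue",
    "Content-Type": "application/octet-stream",
  },
});

// Node only streams the body once the server answers "100 Continue".
req.on("continue", () => {
  fs.createReadStream("./photo.jpg").pipe(req);
});

// If the server instead replies straight away with a final status
// (a 3xx redirect or a 4xx error), the body is never sent and no
// upload bandwidth is wasted.
req.on("response", (res) => {
  console.log(res.statusCode, res.headers.location);
});
```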
gRegorLove, ingoogni1, petermolnar, blueyed, swentel, swentie, iasai and [grantcodes] joined the channel
#[grantcodes]Ha [eddie] I am impressed by your dedication to your serverless setup, seems you're at the point of finding workarounds for workarounds 😛
[grantcodes], [kevinmarks], [Rose] and ingoogni joined the channel
#[grantcodes]Aha so [Rose] it looks like you can't log into together because of an activity streams / fediverse plugin? But I should be able to fix that
#[grantcodes]I assume it is because I have not set up the content type on my http request properly
#ZegnatI can make an HTTP request for rosemaryorchard.com without an Accept header and I still get HTML back
#ZegnatWhere are you seeing that issue, [grantcodes]?
#[grantcodes][Zegnat] I think it is probably the http lib I am using, I think it tries to set some smart defaults as it is mainly for json
#ZegnatOooh, so the lib may set the accept to JSON?
#ZegnatYep, [Rose]’s website will return JSON for a json Accept header
#Zegnat[Rose]: would you have the ability to add the microsub endpoint to a link HTTP header? I see micropub and webmention already there.
#ZegnatThat may also fix it (no HTML necessary for parsing and finding the endpoint)
#[grantcodes]That is another way to fix it, but still should fix my request somehow
#ZegnatI agree, [grantcodes]. Just at the off-chance [Rose] is around and looking for the quick fix
#Zegnat(As in: faster than waiting for restructure in Together and a new launch to the public hosted version.)
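As a sketch of the client-side fix being discussed (assuming a fetch-style HTTP call; Together's actual code may differ), explicitly asking for HTML avoids a JSON default from the HTTP library:

```typescript
// Ask for HTML explicitly, so a site that also serves JSON (content
// negotiation on the Accept header) returns the page we want to parse
// for its rel="microsub" and IndieAuth endpoints.
const response = await fetch("https://rosemaryorchard.com/", {
  headers: { Accept: "text/html" },
});
const html = await response.text();
```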
[jgmac1106] joined the channel
#[Rose]I can add the Microsub to my <head>, but I already did
#[grantcodes]Eh, was only 3 lines. It is updated on the hosted together
#[grantcodes]You should be able to log in now [Rose]
#Zegnat[Rose]: I meant to the HTTP Headers, not the HTML <head>.
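For reference, an endpoint advertised in an HTTP Link header (rather than in the HTML `<head>`) looks roughly like this; the URLs are placeholders, not [Rose]'s real endpoints:

```http
Link: <https://example.com/microsub>; rel="microsub"
Link: <https://example.com/auth>; rel="authorization_endpoint"
```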
eli_oat and [eddie] joined the channel
#[eddie][grantcodes] 😆 Workarounds for workarounds sounds pretty accurate. It's gonna make for a fun/interesting blog post when I'm done
#[eddie]The good news is that the Media Endpoint seems like it'll be the hardest thing. Everything else (Micropub, Webmention processing, etc) all seems more suited for what is happening here and should be much easier
#[Rose]Oh, no idea how to do that on WordPress, I can try though.
#[grantcodes][Rose] you shouldn't need to now. But the issue I had was not the microsub endpoint that Zegnat mentioned but that the authorization endpoint was missing from the headers. Microsub may be missing too, but I didn't get that far 😛
#ZegnatYes, authorization endpoint is also missing. Sorry for being unclear.
[Zegnat] joined the channel
#[Zegnat][Rose] I would have expected to see microsub / indieauth-related headers there. Maybe worth filing an issue on the WP plugins.
#snarfedbridgy's superfeedr notifications have the oddest pattern. repeats every three days, so it's not a weekly thing. maybe some three day long cycle of batch jobs inside superfeedr, or in wp.com or blogger or tumblr? so weird. https://snarfed.org/bridgy_superfeedr.png
#aaronpkhuh. i should log the ones i'm receiving too
#snarfedhas both good analytics and good individual subscription debugging tools
gRegorLove_ and [kim_landwehr] joined the channel
#[kim_landwehr]When connecting Quill (quill.p3k.io) with Blot.im using Heroku, trying to figure out the line <link rel="micropub" href="https://deployed-blotpub-app.com/micropub">. Is that all I need to put in or do I change it in any way? Feeling confused and probably overthinking
#ZegnatI do not know what Blot.im is, or how its micropub works. But generally Micropub clients need a website to have the rel="micropub" defined, like the tag you have there, as well as IndieAuth endpoints defined for login to work
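Broadly, a Micropub client like Quill looks for markup along these lines on the homepage; the Micropub URL is the one from the question above, and the IndieAuth endpoint URLs here are just commonly used defaults, not necessarily what a Blotpub setup uses:

```html
<!-- Micropub endpoint plus the IndieAuth endpoints clients need for login -->
<link rel="micropub" href="https://deployed-blotpub-app.com/micropub">
<link rel="authorization_endpoint" href="https://indieauth.com/auth">
<link rel="token_endpoint" href="https://tokens.indieauth.com/token">
```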
[eddie] joined the channel
#[eddie]Blast! I think my Serverless Media Endpoint plans have officially collapsed! S3 supports uploading a file directly via a PUT request, but a POST request requires specific fields in the multipart form data, which obviously no Micropub client is going to send
#[eddie]lol 🤷♂️ so no uploading directly to Amazon s3
[Rose] joined the channel
#[Rose]No chance of a script that "translates" and passes the request on to S3?
#jgmac1106[m]Snarfed what is the IP address of Bridgy I should send to my shared host to try and get bit ninja to unblock it?
#aaronpkYou could make a lambda script that does that
#[eddie]and my media endpoint already has items bigger than 5 MB so that's definitely not doable (I also would like to start posting video and audio to my media endpoint which will just grow that lol)
#aaronpkHm how do other services handle this then? Plenty of sites have user generated content >5 MB stored on S3
#[eddie]Yeah so it works as long as you control the upload mechanism
#aaronpkIIRC the twitter api supports a kind of chunked upload
#[eddie]typically the workflow is you use the amazon sdk to generate a "signed url" for s3 that provides access
#[eddie]then if you control the client, you can send the file via a PUT to that signed url
#[eddie]but of course, that doesn't match the Micropub Media Endpoint method of POST
#[eddie]the other option for people that want to do POST uploads is they include details about the s3 bucket inside the form they are using to upload the file
#[eddie]again, as long as you are building the app you can send s3 specific details inside that form
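What those "S3 specific details inside that form" amount to, sketched with the AWS SDK v3 pre-signed POST helper; the bucket and key are illustrative. The point is that the returned `fields` have to be sent alongside the file, which a generic Micropub client will not do:

```typescript
import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

const s3 = new S3Client({ region: "us-east-1" });

// Returns a URL plus a set of policy/signature form fields that must be
// included in the multipart POST along with the file itself.
const { url, fields } = await createPresignedPost(s3, {
  Bucket: "example-media-bucket", // placeholder bucket
  Key: "uploads/photo.jpg",
  Conditions: [["content-length-range", 0, 50 * 1024 * 1024]], // up to 50 MB
  Expires: 300,
});

console.log(url, fields); // fields include key, policy, signature, etc.
```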
#aaronpkI know a bunch of other services are copying the S3 API to support other tools out of the box
#[eddie]Yeah i've seen some stuff like that, like DreamHost has DreamObjects that is an S3 compatible object storage
#jgmac1106[m]I got a warning. Trying to find out if it is slowing me down, just trying to turn over any rock
#aaronpkDoes that approach have any benefits that other micropub servers or clients would benefit from?
#[eddie]the S3 API is pretty good, if they would just allow a POST request sent with just a file to a single url (exactly like their PUT works) everything would be perfect
#jgmac1106[m]Though it may not be necessary seems Known's internal plumbing is picking up majority of comments and likes... Need to figure out how much the two duplicate each other... Known POSSE and Bridgy
leg joined the channel
#[eddie]hmmm thinking about [Rose]'s mention of "translating" I wonder if I could set up a "proxy" somewhere somehow, that literally doesn't use any code but just accepts POST requests for a url and proxies those to s3 as PUT requests. I'm not even sure if you can change http method when proxying
#[eddie]much less where I would put the proxy, because if it's on my server and it goes down again then I'm still out of luck, thus defeating the purpose of serverless
#[eddie]I'll be really happy when I get past the Media Endpoint and can move on to the stuff that I know will be easy using Serverless stuff: Micropub, Webmention, Processing and Storage
#[eddie]The other option is I should just open up my s3 bucket to the entire world, and delete things within 24 hours lol
#ZegnatA close-to-the-protocol proxy shouldn’t have any issue with rewriting only the method, I guess? You would literally be rewriting only the first line of the request, and otherwise just byte-for-byte proxy it straight to the second server
#ZegnatWouldn’t even need to write to disk at any time?
#[eddie]Yeah I just saw it seems like nginx's proxy can change the method
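Something along these lines is what's being imagined in nginx, using `proxy_method` to rewrite the incoming POST into a PUT; this is an untested sketch with a placeholder bucket, and it deliberately leaves out how the forwarded request gets authorized to S3:

```nginx
location /media-upload {
    # Take the Micropub client's POST and forward it to S3 as a PUT.
    proxy_method PUT;
    proxy_pass https://example-media-bucket.s3.amazonaws.com/;
    client_max_body_size 100m;
    # Not handled here: S3 still needs valid authentication (a signed
    # request or a public-write bucket), so this alone is not enough.
}
```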
#snarfed[eddie]: out of curiosity, have you looked at other serverless platforms? seems like you're working really hard to squeeze through some pretty specific limits that are probably AWS-specific
#[eddie]I have, all serverless functions have the same limits
#[eddie]it would definitely be helpful as the spec grows, where you might want to upload large videos or audio files
#aaronpkHm how does micro.blog handle mp3 uploads for podcasts? IIRC the limit is pretty large
snarfed joined the channel
#[eddie]Good question. Most people use either Wavelength or the web interface to upload
[grantcodes] joined the channel
#[eddie]Wavelength DOES use Micropub but is locked to Micro.blog for uploads right now
#[grantcodes]So a non "serverless" option is out of the question? Feels like it would be pretty easy to write something non-serverless to upload to S3
#[eddie]it does take a little while to upload some audio files to Micro.blog but it works
#[eddie][grantcodes] I could write something non serverless to upload to s3. Just after my DreamHost on-demand instance was unavailable for a day, I'm trying to migrate what I can to serverless
#[eddie]the other benefit of that is if you are having to do rejection or redirection you aren't uploading an entire file first
#[eddie]which was another fault that Zegnat brought up with redirection
#aaronpkA little more work on both the client and the server but this does provide benefits
#[eddie]Yeah. I think two big things are: good support for large files and if you are in bad network conditions you don't have to retry the ENTIRE file which is nice
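For comparison, the chunked approach on the S3 side is its multipart upload API, sketched below with the AWS SDK v3; the bucket, key, and chunking are simplified placeholders. Each part can be retried on its own instead of re-sending the whole file:

```typescript
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });
const Bucket = "example-media-bucket"; // illustrative names
const Key = "uploads/podcast.mp3";

// The file split into pieces of at least 5 MB (except the last part).
const chunks: Buffer[] = [];

// Start the upload, send each chunk as a numbered part (any part that fails
// on a flaky connection can be retried alone), then stitch them together.
const { UploadId } = await s3.send(new CreateMultipartUploadCommand({ Bucket, Key }));

const parts: { ETag?: string; PartNumber: number }[] = [];
for (const [i, chunk] of chunks.entries()) {
  const { ETag } = await s3.send(
    new UploadPartCommand({ Bucket, Key, UploadId, PartNumber: i + 1, Body: chunk })
  );
  parts.push({ ETag, PartNumber: i + 1 });
}

await s3.send(
  new CompleteMultipartUploadCommand({
    Bucket,
    Key,
    UploadId,
    MultipartUpload: { Parts: parts },
  })
);
```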
[aaronpk], [eddie], [Rose] and Johan1 joined the channel
#KartikPrabhu now that G+ is dead what are people doing with the syndication links?
#KartikPrabhuI want to keep them so that my original-of page still works with G+ URLs but I want to indicate on my site and possibly in mf2 that those no longer exist and are not to be trusted
jbove and [schmarty] joined the channel
#[schmarty]i have been thinking of this as i want to import my G+ data into my site and indicate that it *was* part of G+
#[schmarty]part of me wants to put it in the "See Also" where i put my syndication links, but strikethrough or gray out the link
#KartikPrabhu[schmarty]: yes, for the visual display I also thought of strikethrough
#KartikPrabhubut automated consumers like mf2 parsers would still pick it up
#KartikPrabhubut how do you mark it up or do you just remove it from the post
#aaronpkis there any value to having other people know the old link?
#aaronpkmy first inclination is to remove it from public display entirely. obvs i'd want to keep storing it internally, but i dont need to link to it publicly anymore.
#KartikPrabhugood point. I was thinking the opposite. If people have the G+ link they can still find my post from the original-of page
#aaronpkright, but that page doesn't need the links to be visible
#aaronpkhm, yeah that sounds like the easiest for me
#aaronpki was first thinking i'd need to store them separately from the current syndication URLs but it might be easier to filter them out on the fly essentially
#aaronpkcan anyone else think of a consuming use case that would require that we'd continue to publish the old URLs?
#aaronpki'm trying to think of when I currently consume other ppls syndication URLs, and I think it's primarily for deduping webmentions
#aaronpkif there was, would it matter anymore once the URLs are dead?
#KartikPrabhuno and it didn't matter before for bridgy publish
#snarfedthey're nice for data mining, eg indie map. eg if i ever did an indie map recrawl, i'd obviously prefer people to display: none them instead of removing them entirely, to preserve the historical data
#KartikPrabhusnarfed: how would dead links help in indie map?
#snarfedKartikPrabhu: understanding how many people POSSEd to it when it was alive, via which/how many users or pages, how often, etc
#snarfed"vive la difference" also applies here. we can make recommendations, but individual people will all do different things. eg i plan to leave mine up. clickers beware.
#sknebelonly use I can think of is if someone wants to dive into archive.org etc for responses and things like that that weren't backfed