Loqi: Web hosting can be the primary recurring cost of maintaining an IndieWeb site; this page lists several options, from free on up, depending on your publishing needs, such as a static, shared, private, or dedicated server https://indieweb.org/hosting
capjamesg: I used to build my Jekyll site locally and then FTP it to the DigitalOcean server I used, but that wasn't efficient. GitHub Actions has been great.
mgdm: I'm interested to hear what you come up with. Currently I have a VM and deploying is a manual `git pull && zola build`. I'd quite like Actions to just do the whole thing for me.
[Murray]: How much back end do you need? I'm using Netlify, which auto-deploys on push to GitHub but also has a webhook that I can trigger directly. I've got a button in my CMS, and used to have one in a reader I used, both of which trigger redeploys.
mgdm: I could shift the whole thing to something like Netlify, except that I like having a VM for odds and ends (like a Gemini server on the same hostname).
jeremycherfas: There is a little PHP script that pulls origin/master when triggered. There is no specific build process because Grav is dynamic, but heavily cached and served from Markdown files.
[Murray]: Yeah, hence the question of "how much back end" 😄 (I have a server too; it's just decoupled from my site and runs the stuff that needs PHP etc.). But something like a webmention receiver/parser you could do in a serverless function via something like Netlify, or just via a GitHub Action. It depends on whether you want the mentions to render between builds or not.
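A trigger script like the one jeremycherfas describes can be very small. Here is a minimal sketch, assuming the site lives at a hypothetical /var/www/site and a shared secret arrives as a query parameter; the path, secret handling, and filename are illustrative, not from the discussion:

```php
<?php
// deploy.php — pull-on-webhook sketch (path and secret are illustrative).
// Compare a shared secret so that random requests cannot trigger a deploy.
$secret = getenv('DEPLOY_SECRET') ?: 'change-me';
if (!hash_equals($secret, $_GET['token'] ?? '')) {
    http_response_code(403);
    exit('forbidden');
}

// Pull the latest content. Grav serves straight from the files,
// so there is no separate build step to run afterwards.
chdir('/var/www/site');
exec('git pull origin master 2>&1', $output, $status);

http_response_code($status === 0 ? 200 : 500);
echo implode("\n", $output);
```

Pointing a Netlify-style webhook, a CMS button, or a GitHub Actions step at this URL then gives a one-request redeploy.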
capjamesg: Right. So, say I have implemented the validation: what is the content of the webmention? Should I be parsing the file with microformats and using the e-content property of the h-entry?
aaronpk: That means, for example, if it's a JSON document, look for whatever a "link" would be in JSON, i.e. look for the URL as the value of some property. It's not really well defined for anything other than HTML, but we didn't want to limit Webmention to only HTML pages.
barnaby: As for which elements, it's up to you. I think most people just make sure that there's a visible link somewhere on the page which matches the target URL.
barnaby: That suffices for validation. Then, when it comes to presentation, you might want to do additional parsing to determine the relation between the source and target, e.g. reply, like-of, mention, etc.
capjamesg: I could parse with bs4 and look for a link, or I could just look for links with the u-in-reply-to class (though I don't think that fits the spec).
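In PHP, the validate-then-classify flow barnaby describes might look something like this sketch, using DOMDocument for the link check and the php-mf2 library for the relation; the helper names and variables are illustrative:

```php
<?php
// Sketch: validate a webmention source, then classify the relation.
require 'vendor/autoload.php'; // assumes mf2/mf2 installed via Composer

// Validation: the source document must contain a link to the target.
function sourceLinksToTarget(string $html, string $target): bool {
    $doc = new DOMDocument();
    @$doc->loadHTML($html); // real-world markup is rarely clean; ignore warnings
    foreach ($doc->getElementsByTagName('a') as $a) {
        if ($a->getAttribute('href') === $target) {
            return true;
        }
    }
    return false;
}

// Presentation: parse microformats to determine the relation to the target.
function classifyMention(string $html, string $source, string $target): string {
    $mf = Mf2\parse($html, $source);
    $props = $mf['items'][0]['properties'] ?? [];
    $types = ['in-reply-to' => 'reply', 'like-of' => 'like', 'repost-of' => 'repost'];
    foreach ($types as $property => $type) {
        foreach ($props[$property] ?? [] as $value) {
            // A property value may be a plain URL or an embedded h-cite.
            $url = is_array($value) ? ($value['properties']['url'][0] ?? '') : $value;
            if ($url === $target) {
                return $type;
            }
        }
    }
    return 'mention';
}
```

Matching on the parsed in-reply-to property rather than the u-in-reply-to class name directly keeps the check within the spec's vocabulary.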
barnaby: GWG: we were just discussing the per-content-type parsing requirement. Are you aware of content types other than HTML which are actively used and should be parsed?
[snarfed]: But barnaby was asking about other _content types_, not just other parts of HTML. AFAIK we've seen discussion of non-HTML webmentions, but no real attempts or experiments in the wild yet.
GWG: [snarfed]: WordPress does that... I know I submitted a PR to change it to use DOMDocument and only use text as a fallback, but I forget if we merged it.
barnaby: capjamesg: the only case in which it makes sense to search embedded content for webmention links is if you've configured your web server to serve images, videos, etc. with an HTTP Link header pointing to your webmention endpoint.
barnaby: I'm not aware of anyone who does it. In theory it's interesting for getting notifications when webmention-enabled sites hotlink your media.
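For anyone wanting to experiment with that, the media has to advertise an endpoint the way barnaby describes. A minimal sketch, assuming the files are routed through a PHP script; the endpoint URL and file path are illustrative:

```php
<?php
// media.php — sketch: serve an image with a webmention Link header
// (endpoint URL and file path are illustrative).
header('Link: <https://example.com/webmention>; rel="webmention"');
header('Content-Type: image/jpeg');
readfile('/var/www/media/photo.jpg');
```

The same header can be added in web-server configuration instead, which avoids routing media through PHP at all.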
Loqi: [Zegnat] I think that would be worth mentioning in the spec then. From that table, it seems the following take URLs:
| Attribute | Elements |
| --- | --- |
| `action` | `form` |
| `cite` | `blockquote`, `del`, `ins`, `q` |
| `data` | `object` |
| `for...
capjamesg: GWG I'm not quite ready yet, as I'm still ironing out a few issues. I'll open-source my code when I'm happy with it so others can use it (as long as I can get it compliant).
barnaby: It only knows how to parse mentions into likes, reposts, replies, or plain mentions, and displays them all in chronological order rather than grouping them, e.g. https://waterpigs.co.uk/notes/5D9NcJ/
sknebel: (E.g. it implements one of the private-webmention attempts, and actually dumps more data to storage than I thought it did. And it has a terrible, terrible hack for supporting homepage webmentions. And it'll accept external URLs that redirect to my site as the target= parameter, which admittedly is a bit iffy.)
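Guarding against that last point is straightforward: reject any target whose host isn't your own before doing anything else. A minimal sketch, with example.com standing in for your hostname:

```php
<?php
// Sketch: strict target validation for a webmention endpoint
// ('example.com' is a placeholder for your own hostname).
$target = $_POST['target'] ?? '';
if (parse_url($target, PHP_URL_HOST) !== 'example.com') {
    http_response_code(400);
    exit('target must be a URL on this site');
}
```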
barnaby: aaronpk: I'm currently running my IA server + Micropub adapter through micropub.rocks. Everything has gone fine so far, but on the GIF media-endpoint upload test the request fails and my server logs this error: "Header name must be an RFC 7230 compatible string."
Zegnat: It would be interesting to see if we can run just the one test against nyholm/psr7. I am happy to take a look this weekend and get the lib patched if the bug is there.
Zegnat: Sounds like it might be a problem in psr7-server, though. That is the one trying to set empty headers. But it normally gets the headers straight from the server passing things to PHP, so, hmmm.
Zegnat: Interesting. I think Nyholm might do some Guzzle work. We have had some Guzzle overlap with the psr7 implementation. So if you have a reproducible case, I would still be happy to have a look into fixing it :)
Zegnat: Interesting. But for sending files, Guzzle is just using PHP's curl bindings, right? Do you get the same if you use curl directly? Or is that too hard to test?
barnaby: I replaced nyholm/psr7 with guzzlehttp/psr7 and the problem went away, so it's a bug which occurs specifically when using the Guzzle client to upload an animated GIF to a server using nyholm/psr7.
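A client-side reproduction along these lines should exercise the same code path; the endpoint URL, token source, and filename are illustrative:

```php
<?php
// Sketch: multipart GIF upload with the Guzzle client
// (endpoint, token source, and filename are illustrative).
require 'vendor/autoload.php'; // assumes guzzlehttp/guzzle

$client = new \GuzzleHttp\Client();
$response = $client->request('POST', 'https://example.com/micropub/media', [
    'headers'   => ['Authorization' => 'Bearer ' . getenv('TOKEN')],
    'multipart' => [[
        'name'     => 'file',
        'contents' => fopen('animated.gif', 'r'),
        'filename' => 'animated.gif',
    ]],
]);
echo $response->getStatusCode(), "\n";
```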
barnaby: The mysterious whitespace header which Guzzle/curl is sending shows up in both $_SERVER and apache_request_headers(), but it looks like nyholm/psr7 is assuming that the headers returned from apache_request_headers() will be reliable.
Zegnat: We have had bug reports before for the wrong package, so I just want to understand whether nyholm/psr7-server is doing the wrong thing, or nyholm/psr7 is doing the wrong thing.
barnaby: Zegnat: I've been experimenting in a minimal test-case file where I load only the exact libraries I want to test. I've been comparing psr7-server against guzzlehttp's ServerRequest::fromGlobals() by itself.
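A minimal comparison along those lines might look like this sketch, using the public constructors of both libraries (Composer wiring of nyholm/psr7, nyholm/psr7-server, and guzzlehttp/psr7 assumed):

```php
<?php
// Sketch: build a ServerRequest from globals with both implementations
// and see which one rejects the blank header.
require 'vendor/autoload.php';

use Nyholm\Psr7\Factory\Psr17Factory;
use Nyholm\Psr7Server\ServerRequestCreator;

$factory = new Psr17Factory();
$creator = new ServerRequestCreator($factory, $factory, $factory, $factory);

try {
    $creator->fromGlobals(); // strict: throws on an invalid header name
    echo "nyholm/psr7: ok\n";
} catch (\InvalidArgumentException $e) {
    echo 'nyholm/psr7: ', $e->getMessage(), "\n";
}

// Guzzle's implementation builds its header list from $_SERVER and is more lenient.
\GuzzleHttp\Psr7\ServerRequest::fromGlobals();
echo "guzzlehttp/psr7: ok\n";
```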
Zegnat: psr7-server is a generic request-to-PSR-7 implementation, so you could also use it to create instances of other PSR-7 implementations. That is why I wanted to be more specific :)
Zegnat: As far as I know, getallheaders() is the most reliable thing offered by PHP, so I would need to check how Guzzle solves this then. Chances are they are just less fussy about following the spec.
barnaby: Yeah, the root of the problem is clearly that the blank header exists at all; whether server implementations should throw exceptions or ignore it silently is a different matter.
Zegnat: Yeah. I will have to give it a little more thought. I do not think we will patch nyholm/psr7 to not throw; that one expects you to be building a correct and valid HTTP object. But I can see there possibly being a case for psr7-server being more lenient and just not passing on completely empty headers.
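Until something like that lands, one workaround in the same spirit is to filter blank headers yourself and hand the result to psr7-server's fromArrays(); the filtering step here is a hypothetical stopgap, not part of either library:

```php
<?php
// Sketch: drop headers with blank names or values before they reach the
// strict nyholm/psr7 constructor (hypothetical stopgap, not a library fix).
require 'vendor/autoload.php';

use Nyholm\Psr7\Factory\Psr17Factory;
use Nyholm\Psr7Server\ServerRequestCreator;

// getallheaders() is available under Apache and PHP-FPM.
$headers = array_filter(getallheaders(), function ($value, $name) {
    return trim($name) !== '' && trim((string) $value) !== '';
}, ARRAY_FILTER_USE_BOTH);

$factory = new Psr17Factory();
$creator = new ServerRequestCreator($factory, $factory, $factory, $factory);
$request = $creator->fromArrays(
    $_SERVER, $headers, $_COOKIE, $_GET, $_POST, $_FILES,
    fopen('php://input', 'r')
);
```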
barnaby: Okay, an upload from command-line curl doesn't include the blank header! So it looks like the Guzzle client is the best place to file an issue, for now.