eli_oat, renem, [miklb] and [manton] joined the channel
ludovicchabant: hey there! is there a "proper" way to differentiate (using Microformats) between a normal (long form) blog post and a microblogging update? So far I can only come up with: an h-entry that has no e-content or p-summary, or has some but are the same HTML element as the p-name
Loqi: Post Type Discovery specifies an algorithm for determining the type of a post by what properties it has and potentially what value(s) they have, which helps avoid the need for explicit post types that are being abandoned by modern post creation UIs https://indieweb.org/post-type-discovery
Loqi: A note is a post that is typically short unstructured* plain text, written & posted quickly, that has its own permalink page https://indieweb.org/note
KartikPrabhu: mostly articles have a p-name and notes don't
ludovicchabant: "note" is a Microblog/Mastodon/Twitter-type microblogging update I assume?
ludovicchabant: on aaronpk's website, microposts have a p-name and e-content (both on the html element containing the status update)
ludovicchabant: so I think in the post-type-discovery page, it would fall into bullet 15 -- where the p-name _is_ a prefix of the content (it's the same actually)
KartikPrabhu: ludovicchabant: that p-name e-content on the same element is to stop the mf2 parsers from implying the name property. This has been corrected in the revised spec
tantek__: KartikPrabhu: no it is still useful in PTD
tantek__: hey devs posting to GitHub from their own site (or wanting to) - has anyone thought about, brainstormed, or prototyped how to comment and close an issue?
ludovicchabant: yeah it feels like it implemented a bit of ad-hoc PTD before it was formalized, and then the other function was added when the spec started being drafted or something like that
ludovicchabant: oh well at least now it spits out the info I want :)
ludovicchabant: now I have to figure out how to extract the images in the note... feels like I might have to match parsed entries with their raw html element counterpart and fish for `img` tags
ludovicchabant: looks like adding this on the note's pictures makes PTD (or at least mf2util's implementation) change the interpreted type to "photo" post instead of "note"... which isn't what I want (and possibly isn't correct?)
ludovicchabant: errr what do you mean photo property in the parsed mf2?
KartikPrabhu: if there is a u-photo in the h-entry, it will show up as a "photo" property for the "h-entry"
KartikPrabhu: and yes PTD will mean its type is a "photo" post
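As a rough sketch of the behaviour discussed above (not taken from ludovicchabant's or aaronpk's actual code), here is how mf2py plus mf2util's post type discovery tends to classify the three shapes of h-entry mentioned: a p-name equal to the content reads as a note, a distinct explicit p-name reads as an article, and adding a u-photo flips the result to a photo post. The markup is invented purely for illustration.

```python
# Minimal sketch: how mf2util's post type discovery reacts to
# p-name vs e-content, and to the presence of a u-photo.
import mf2py
import mf2util

note_html = """
<article class="h-entry">
  <div class="p-name e-content">Just a short status update.</div>
</article>
"""

article_html = """
<article class="h-entry">
  <h1 class="p-name">A long-form title</h1>
  <div class="e-content">Many paragraphs of prose...</div>
</article>
"""

photo_html = """
<article class="h-entry">
  <div class="p-name e-content">Look at this!
    <img class="u-photo" src="/img/cat.jpg" alt="a cat">
  </div>
</article>
"""

for label, html in [("note", note_html), ("article", article_html), ("photo", photo_html)]:
    parsed = mf2py.parse(doc=html, url="https://example.com/post")
    hentry = parsed["items"][0]
    # post_type_discovery() implements the PTD algorithm: a photo property
    # wins, otherwise the name is compared against the content.
    print(label, "->", mf2util.post_type_discovery(hentry))
```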
tantek.com edited /GitHub (+245) "/* Features */ reply on an issue can also at the same time close open issue or re-open closed issue, lock/unlock" (view diff)
tantek.com edited /issue (+725) "/* Brainstorming */ Close and re-open issues when commenting, Lock and unlock issues" (view diff)
ludovicchabant: tantek__: regarding "Do we need an additional property for a reply?", it's interesting because I'm looking at some of my articles on my website, and I have some long-form articles that comment on/reply to other articles, and those are technically "reply" posts, but there's also note replies, i.e. twitter-like replies to other twitter-like (note) posts
KartikPrabhu: ludovicchabant: those all count as replies
ludovicchabant: still, it would be nice to be able to separate "replies that are notes" from "replies that are articles"
ludovicchabant: especially for POSSEing (which is what I'm implementing)
KartikPrabhu: maybe, but that is completely different from tantek__'s close and comment stuff
ludovicchabant: the former would be posted in their entirety to twitter/mastodon/etc while the latter would just be "{title} {link}"
ludovicchabant: sure, but there's just that comment about "additional property for a reply"
ludovicchabant: yes ignore me -- I was just thinking out loud :)
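A hypothetical sketch of the POSSE split ludovicchabant describes: note-type posts (including note replies) syndicate in full, article-type posts syndicate as "{title} {link}". The function name, parameters, and truncation rule here are made up for illustration and are not from any existing implementation.

```python
# Hypothetical helper: choose syndicated text based on the discovered
# post type ("note" vs "article"), as described above.
def posse_text(post_type: str, title: str, content_text: str, permalink: str,
               limit: int = 280) -> str:
    if post_type == "article":
        # Long-form posts syndicate as "{title} {link}".
        return f"{title} {permalink}"
    # Notes (including note-style replies) syndicate in their entirety,
    # falling back to a truncated copy plus permalink if too long.
    if len(content_text) <= limit:
        return content_text
    return content_text[: limit - len(permalink) - 2] + "… " + permalink
```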
tantek.com edited /issue (+63) "/* Close and re-open issues when commenting */ clarify question about additional reply - specifically for closing/re-opening" (view diff)
cweiske and leg joined the channel
tantek.com edited /issue (+407) "/* Close and re-open issues when commenting */ Plain text close re-open thoughts" (view diff)
tantek.com edited /issue (+162) "/* Close and re-open issues when commenting */ perhaps model issue closing/re-opening orthogonally, then allow for it to be included in a reply" (view diff)
tantek.com edited /issue (+140) "/* Plain text close re-open */ example of "Closing issue."" (view diff)
sknebel: I noticed earlier that indiewebify.me for some reason could verify rel=me for twitter. do I misremember and that was never broken, or did they do something so that sometimes they do deliver the useful version of their pages (e.g. user-agent dependent?)
Zegnat: They did a thing where pages would not load at all without the cookies from their initial JS-only redirect page (or something of that kind), making it impossible for tools like curl to get at the HTML of a given URL
Zegnat: Possibly they have come to their senses and are actually serving HTML on URLs again
sknebel: I can't quite figure it out, but something has gotten better then
sknebel: browser devtools give conflicting info, but curl finds a rel="me" link
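For anyone wanting to repeat sknebel's check, something like the following mimics a non-browser client and lists whatever rel="me" links the served HTML contains. The profile URL is a placeholder, and results will vary with whatever Twitter happens to be serving at the time.

```python
# Sketch: fetch a profile URL the way a non-browser client would and list
# any rel="me" links found in the HTML that was actually served.
import requests
import mf2py

resp = requests.get("https://twitter.com/example", timeout=10)
parsed = mf2py.parse(doc=resp.text, url=resp.url)
print(parsed.get("rels", {}).get("me", []))
```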
eli_oat, [wiobyrne] and [jgarber] joined the channel
Loqi: [aaronpk] #83 <br> tags are not interpreted as whitespace when converting HTML to plaintext
Zegnat: [jgarber], we’d really like to solve this in the mf2 spec, so all parsers can do the same. Stripping <br> tags completely is currently correct behaviour per spec.
[jgarber]: zegnat: Ah, yeah. There’s definitely some inconsistencies between the spec and the various parsers.
[jgarber]: zegnat: I’ll give that issue a read! Thanks for the link.
Zegnat: I think the inconsistencies stem from the fact that everyone understands textContent to not be sufficient. Sadly HTML’s innerText implementation is very much dependent on CSS rendering stuff. So we have to try and find some middle road to standardise on
[jgarber]: I was about to suggest Markdown’s whitespace handling as a source of inspiration, then I dug into the docs and found:
Zegnat: You probably want to steer clear of that one. Also, you’d need reverse-markdown-whitespace-handling. Where do you break between paragraphs? How do you lay out lists?
Zegnat: I keep thinking someone must have figured this out already, and I keep coming up blank. (Well, there are HTML-to-Markdown scripts, but I find little documentation on whitespace. And they often only support subsets of HTML.)
[jgarber]: Right, right. There are n Markdown specs out there and the most official one is John Gruber’s, but that hasn’t stopped the community from creating forks and adding/changing functionality.
[jgarber]: So yeah, it’s a crapshoot on the HTML => Markdown conversion.
[jgarber]: Because it’s early and I’m just starting in on coffee: how useful is the plaintext serialization of HTML content?
sknebel: which one? innerText, which we suggest right now, is horrible to implement, since it depends on CSS etc
Zegnat: I think the idea is that HTML filtering is hard, so if we can get serialisation for plain text (mf2 p- properties) people are given a standard way to include comments and other content from third parties without worrying about XSS and other things
Zegnat: Sometimes you just want to know “what does X say” rather than “what HTML is used by X to express it” (which would be mf2 e- properties)
Zegnat: There is also the use-case for POSSE: you want to copy the contents of a thing to a place (e.g. Twitter) where rich HTML content is not allowed.
[jgarber]: In all of these use cases, it seems likely that whitespace _could_ be significant and the intention of the author for clarity, style, etc.
skippy: replace all non-newline whitespace characters with " " ?
Zegnat: whitespace is kept the way you author it in your HTML (default by textContent). The problem is that `<p>Wow</p><p>Yes</p>` doesn’t include any whitespace and thus turns into one word: `WowYes`.
Zegnat: What I’d actually love to do is sit down with someone who understands the dense CSS spec, and rewrite HTML’s innerText to not require CSS knowledge. Right now I have trouble grokking some of the stuff, for example step 3’s explanation of select dropdowns: https://html.spec.whatwg.org/multipage/dom.html#inner-text-collection-steps
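One possible CSS-free middle road of the kind being discussed, sketched with Python's standard html.parser: treat <br> as a newline and separate common block elements with blank lines, so `<p>Wow</p><p>Yes</p>` no longer collapses into `WowYes`. This is only an illustration, not behaviour any spec or parser currently defines.

```python
# Sketch of a CSS-free HTML-to-plaintext pass: <br> becomes a newline and
# block-level elements are separated by blank lines, instead of being
# concatenated the way textContent would do.
from html.parser import HTMLParser

BLOCKS = {"p", "div", "li", "blockquote", "h1", "h2", "h3", "h4", "h5", "h6"}

class PlainText(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "br":
            self.parts.append("\n")

    def handle_endtag(self, tag):
        if tag in BLOCKS:
            self.parts.append("\n\n")

    def handle_data(self, data):
        self.parts.append(data)

    def text(self):
        return "".join(self.parts).strip()

p = PlainText()
p.feed("<p>Wow</p><p>Yes</p>")
print(repr(p.text()))  # 'Wow\n\nYes' rather than 'WowYes'
```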
[jgarber]: Unrelated question: Is it helpful for wiki pages for services like /Heroku to list pricing tiers? That seems like a moving target and I’d imagine the wiki would get out-of-sync fairly regularly.
Zegnat: I think we have done some price information about different TLDs, to help people getting started on a domain of their own. But I wouldn’t do per-service or per-page pricing info unless you sign up for keeping it updated
sknebel: I personally like having a note about free tiers, but other than that agreed
[jgarber]: Okay, so a generalized “Pricing” section on pages like Heroku, etc. would be useful with links off to the service’s website for details on paid tiers?
sknebel: not necessary imho, pricing is generally easy to discover from their homepages. I guess maybe if there's oddities to how they bill stuff that need explaining
[wiobyrne] and [snarfed] joined the channel
[snarfed]: re whitespace, if it helps, bridgy publish currently converts html to text for twitter/flickr, and to markdown for github, and works hard to get the whitespace right. feel free to try it out or use it as an example
Zegnat: I’ll have a look :) I don’t like a spec addition that says “do what application Y does”. That’s not helpful for encouraging plurality and multiple implementations
snarfed: oh of course, i'd never propose that. just mentioned it as an example existing implementation
TripFandango, [jgmac1106] and [kevinmarks] joined the channel
[kevinmarks]: Html2text was originally aaronsw's contribution to markdown
jackjamieson, KartikPrabhu, snarfed, eli_oat and [manton] joined the channel
[manton]: Any best practices on HTML/JSON response from a Micropub media endpoint? Currently I just return the Location header, but there are some tools (such as Workflow on iOS) where it would be useful to also have the URL in the JSON body.
aaronpk: hm yeah a recommended response body would be useful
aaronpk: let me see if there's any notes on that so far
aaronpk: ah there's the experimental "query the last thing that was uploaded to the media endpoint" feature
[manton]: Thanks. The spec says "The response body is left undefined" but if other tools are already returning some JSON or HTML, I think I'll copy that as a convenience.
myfreeweb: [manton]: for photos, my media endpoint returns a big blob of json that should be inserted into the photos/whatever property. it contains... a lot :) links to both jpeg/png and webp versions + blurry tiny webp preview to display while loading + metadata parsed from exif + a color palette :D
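A sketch of the kind of response body being discussed: keep the Location header the Micropub spec requires and also echo the URL in a small JSON body so tools like Workflow can read it. The "url" field is a convention some endpoints use; the spec itself leaves the body undefined, and the route and URL below are placeholders.

```python
# Sketch (Flask-style) of a media endpoint response that keeps the
# required Location header and duplicates the URL in a JSON body.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/micropub/media", methods=["POST"])
def media_endpoint():
    # ... store the uploaded file and work out its public URL ...
    url = "https://example.com/media/abc123.jpg"  # placeholder
    resp = jsonify({"url": url})
    resp.status_code = 201
    resp.headers["Location"] = url
    return resp
```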
snarfed: so aaronpk i was thinking through some auth details last night for third party services that aren't part of a site itself, like aperture and maybe eventually baffle
snarfed: i get the auth flow in general, and i get that the service would use an individual site's token endpoint to verify its Bearer token(s)
snarfed: ...*but* the service needs to know which site a given request/token is for, so it can discover the token endpoint in the first place, right?
snarfed: so external microsub/micropub/etc services would generally need unique endpoints per site, with the site's domain or user id (like aperture does) or something baked in, so they know which site each request is for, right?
aaronpk: yeah that's why I have a unique microsub endpoint per user in aperture
aaronpk: aperture goes and queries your own token endpoint to find out if the token is valid, since aperture wasn't the one that issued those tokens
snarfed: and i'm guessing you used user id instead of domain so that people could switch domains and keep their subscriptions? or was that not intentional?
aaronpk: indieauth IDs aren't limited to domain names, some of them are URLs with paths, and I didn't want to deal with escaping those to use in the microsub endpoint URL
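A sketch of the verification step aaronpk describes, for an external service that did not issue the token itself: discover the user's token endpoint from their site, then ask that endpoint whether the Bearer token is valid. The rel="token_endpoint" discovery via mf2py and the helper name are illustrative assumptions, not Aperture's actual code.

```python
# Sketch: verify a Bearer token against the *user's own* token endpoint,
# since the external service did not issue the token itself.
import requests
import mf2py

def verify_token(user_url: str, bearer_token: str) -> dict:
    # Discover the user's token endpoint (assumes a rel="token_endpoint"
    # link on their home page).
    home = requests.get(user_url, timeout=10)
    rels = mf2py.parse(doc=home.text, url=home.url).get("rels", {})
    token_endpoint = rels["token_endpoint"][0]

    # Ask the token endpoint about the token; a valid token yields
    # token info such as "me", "client_id" and "scope".
    resp = requests.get(
        token_endpoint,
        headers={"Authorization": f"Bearer {bearer_token}",
                 "Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```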
aaronpk: I was going to just do the JSON one but then I didn't know how to parse JSON from Workflow so I made the text response too
snarfed: hey GWG when you get a chance, would you mind pushing out a new version of the bridgy wordpress plugin to remove facebook? people are still (trying to) sign up for bridgy facebook with it
snarfed: i know you're doing a bigger redesign, but i expect that will take a while
dgold: I just realised that when manton said he was setting up m.b. for indieauth he meant as a _provider_ of token/auth
[cleverdevil]: Yup! Huge influx of Indieauth install base now 🙂
skippy: aaronpk: reading "quoted retweets" in Monocle is weird, because the original tweet is not displayed. Is it possible to fetch the quoted tweets for display in Monocle? And is this ultimately a Granary, Aperture, or Monocle issue?
snarfed: skippy: is this granary's html output or atom? i know quote tweets render ok (fully) in atom at least. example in newsblur: https://snarfed.org/quote.png
GWG: It will be, but the REST API is a gigantic change that I can't do in smaller pieces
[cjwillcock] joined the channel
snarfed: sure! that makes sense. but it sounded like you were doing other changes and refactoring along with that (besides media endpoint). hopefully those can at least be pulled out
GWG: snarfed, the refactoring was necessary as part of the REST API stuff. I had to change how it worked.
GWG: So, the authentication had to be separate from the endpoint code.