#aaronpkI thought the tests repo was just the data
#willnorrisI'm also not wild about the fact that there are now tests for at least some proposed (but not yet accepted) changes to mf2. The recent one I ran across was using video.poster as an implied u- property
#gRegorLoveI'm pretty sure the tests repo is the output of glennjones parser (node? mf-shiv? same thing? unclear to me)
#willnorrisa library adding support for these proposed changes is one thing, but I don't think they should be captured in microformats/tests until they're formally accepted (whatever that process looks like)
#aaronpkoh, I thought the .json files represented the canonical expected output of any parser
#willnorrisyes, that's true, but how do you decide what that canonical expected output should be?
#willnorristoday, it's basically whatever the nodejs or shiv implementation outputs
#gRegorLoveBased on inspecting one test, I'm pretty sure the json is from MF Node parser
#willnorristhe flip side of all this of course, is that unless you actually have multiple implementations regularly running the test suite, the current situation is the only outcome possible
#aaronpki thought the mf2 parsing algorithm described a canonical output
#willnorrisdoing value-class-pattern on datetime values is also a bit all over the place. Part of that is from some of the -issues stuff not getting integrated into the core spec pages, which tantek has since fixed
#willnorriswell, "all over the place" from my limited perspective. It may actually be completely fine, but the tests were actually ahead of the spec for a while
#willnorrisyes, that one was definitely a bug in the tests
#gRegorLovemicroformat node "is the same codebase as microformat-shiv project, but used the ultra-fast HTML DOM cheerio."
#tantekgenuinely glad to have you around these parts again willnorris
#willnorristantek: thanks. happy to be back (for a time, anyway :) )
#tantekwillnorris - re: formally accepted mf2 parser issues, I look for two things: rough consensus among the discussion of an issue, and then at least one parser implementation of it (i.e. someone demonstrates the resolution is implementable)
#tantekthat being said, I very well may be behind on incorporating some resolved mf2 parser issue into the spec
#tantekwhen you find specific instances of that, please raise them to my attention. really appreciated.
#willnorristantek: that sounds completely reasonable. My concern is more when things that do not yet have consensus or are not yet integrated into the spec make it into the test suite. Then other implementations have to add support for what may be experimental features if they want to continue passing the test suite. It's basically the vendor prefix problem
#tantekyikes hopefully not as bad as vendor prefix problem
#willnorriswill do. I've still got a bit more work to do to get the go library fully passing, so may find more of these
#willnorris(no probably not, but I knew that would make my point)
#tantekI believe glennjones has been updating the test suite in sync with his microformats node implementation which means when he implements a resolved issue, there is a time when the test suite may be ahead of the spec
#willnorrisif these were tests that were just for the node implementation, then I wouldn't care. But they're presented as THE test suite which all implementations should seek to pass
#willnorrisit's unclear to someone like me who comes along and just sees that the tests don't match the spec. it's very difficult for me to know if that's because the tests are wrong, or because the spec is just lagging a little bit
#tantekthey were just the node implementation, but since they are so far ahead of any other microformats test suite attempt, they are becoming the de facto canonical test suite
#willnorrisah, interesting. I didn't know that history
#tantekwillnorris: understood about the unclear / difficult to know aspect
#tantekwe had similar challenges with CSS1 and the CSS1 Test Suite and browsers :)
#tantekthis is part of the reality of polishing a standard
#tantekthere is some time lag between such different pieces of the puzzle
#tantekand that time lag is preferable to strict bureaucracy to keep everything in sync which would actually slow *everything* down, and thus ironically contribute to more legacy problems
#tantekI think in "data" circles this is called "eventual consistency" or something?
#willnorrisfair point. And I certainly don't mean to sound ungrateful. The fact that this test suite exists *at all* is amazing, and way better than not having anything
#tantekyup - no ungratefulness taken :) the questions you have are perfectly reasonable
#aaronpkhere's an example of a receiver test for webmention.rocks where i'm not sure the best way to test for this: "Verifies that target is a valid resource for which the receiver accepts Webmentions"
#aaronpkhow do I know what resources the receiver accepts webmentions for? I could assume that no legitimate receiver would accept webmentions for example.com but that isn't necessarily true
#tantekthat is, you send them a target of their domain with some total and utter nonsense
#[kevinmarks]If we can make the parsers update with warnings when tests change
#aaronpki might want to accept webmentions of URLs on my domain that don't necessarily correspond to a file or post on my site though, just to see what people are linking to
#tantekaaronpk, you may also be running into a need to explicitly specify the difference in implementation class(es) between "receivers" and "proxy receivers"
#tantekGWG, lots of debate on that one, up to the site
#aaronpkthat's a good point, that might make this more obvious
#tantek"Verifies that target is a valid resource for which the receiver accepts Webmentions"
#tantekthat's the whole reason we're having this discussion
#aaronpkright. that means the receiver decides what resources it accepts.
#aaronpkwebmention.io decides to accept all resources
#tantekproxy receivers have no way of complying with "Verifies that target is a valid resource for which the receiver accepts Webmentions"
#aaronpkthere's also this in the spec: "A Webmention Receiver is an implementation that receives Webmentions to one or more target URLs on which the Receiver's Webmention endpoint is advertised."
#aaronpkis beginning to think that tonight's task is going to be to simply document all the tests required rather than implement any of them
#tanteki.e. the spec doesn't say "A Webmention Receiver is an implementation that MUST reject Webmentions to any target URLs on which the Receiver's Webmention endpoint is NOT advertised."
#aaronpkit actually does: "The receiver SHOULD check that target is a valid resource for which it can accept Webmentions."
#aaronpkbut there's no way to test it except testing the inverse
#aaronpkthe point of the implementation report is to have a checkbox for every part of the spec that means something. so i have a checkbox for that sentence. but i can't make a test to test that sentence without testing the inverse of it.
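(A minimal sketch of what testing that inverse might look like, assuming a hypothetical receiver endpoint: send a webmention whose target is plausible-looking nonsense on the receiver's own domain and check for a 4xx rejection. As noted above, a receiver like webmention.io that deliberately accepts all resources would legitimately not reject it.)

```go
// Sketch of testing the inverse of "receiver accepts Webmentions for target".
// The endpoint and target URLs are hypothetical placeholders, not part of any
// real test suite.
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	endpoint := "https://example-receiver.com/webmention" // hypothetical endpoint
	form := url.Values{
		"source": {"https://webmention.rocks/receive/1"},              // page claiming to link to target
		"target": {"https://example-receiver.com/no-such-post-xyzzy"}, // nonsense target on the receiver's own domain
	}

	resp, err := http.PostForm(endpoint, form)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode >= 400 && resp.StatusCode < 500 {
		fmt.Println("receiver rejected the invalid target (expected)")
	} else {
		fmt.Printf("receiver returned %d; it may accept webmentions for any URL on its domain\n", resp.StatusCode)
	}
}
```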
#aaronpkthe next interesting one is "it must perform an HTTP GET request on source, and follow any HTTP redirects (up to a self-imposed limit such as 20)"
#aaronpki agree it's kind of misleading to have the text say " it must perform an HTTP GET request on source" when it actually is fine to do HEAD first
#bearthe point for me is this -- since the purpose of the request is to verify that the resource exists, HEAD is the proper call
#bearlater in the process when the HTML is required, then a GET can be performed
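(A rough sketch of the HEAD-then-GET flow bear describes, with a hypothetical source URL. This is one possible reading of the spec text, not what it literally mandates; note Go's default client follows up to 10 redirects rather than the spec's example limit of 20.)

```go
// Sketch: HEAD to cheaply verify that source exists, GET later only when the
// HTML body is actually needed to confirm the link to target.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	source := "https://example.com/some-post" // hypothetical source URL

	// Step 1: HEAD to confirm the resource exists (redirects followed by default).
	head, err := http.Head(source)
	if err != nil {
		fmt.Println("could not reach source:", err)
		return
	}
	head.Body.Close()
	if head.StatusCode >= 400 {
		fmt.Println("source does not exist; reject the webmention")
		return
	}

	// Step 2: later, when verifying that source actually links to target,
	// perform the GET and parse the body.
	get, err := http.Get(source)
	if err != nil {
		fmt.Println("could not fetch source:", err)
		return
	}
	defer get.Body.Close()
	body, _ := io.ReadAll(get.Body)
	fmt.Printf("fetched %d bytes of HTML to check for a link to target\n", len(body))
}
```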
#[kylewm]I have a feeling there are all sorts of nasty edge cases with representative h-entry and authorship when you get into parsing from fragment identifiers
#Saltsure, but trying to suit both; it's just that the validator isn't picking up a couple of tags. pin13 does, but the other one doesn't
#tantekit was optimistically added to the HTML5 spec years ago (along with a bunch of new semantic elements) but browsers etc. never did anything with it
#Saltkinda hack and slashing to get this rsvp out, I have ~10 95% finished posts that I should edit and put up first, but I want to get this in in case slots fill up
#Saltlooking forward to refining this stuff and spending the hack day working on more middleman incorporation... I really should try to find the time to upgrade to v4 before doing too much hacking on it
#Saltso I did the ti.to registration but as an RSVP, though it seems to have ticked off one of the "via ti.to only" slots instead of the RSVP slots...
#SaltI need a new avatar anyway, that one has way less hair than is currently the case :P
#bearsalt - reading the scroll back my only thought/suggestion is that content-encoding is not the same as content-type so anything processing your site for mf2 will not automatically gunzip it
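(Illustrative sketch of bear's distinction, with a hypothetical handler rather than Salt's actual S3/CloudFront setup: Content-Type says what the bytes are, Content-Encoding says how they are compressed in transit. If a static host serves gzipped files without declaring Content-Encoding, an mf2 consumer will try to parse compressed bytes as HTML.)

```go
// Serving gzipped HTML correctly requires both headers: Content-Type for the
// media type and Content-Encoding for the compression.
package main

import (
	"compress/gzip"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/html; charset=utf-8") // what the bytes are
	w.Header().Set("Content-Encoding", "gzip")                 // how they are encoded

	gz := gzip.NewWriter(w)
	defer gz.Close()
	gz.Write([]byte(`<article class="h-entry"><a class="u-author" href="/">me</a></article>`))
}

func main() {
	http.HandleFunc("/post", handler)
	http.ListenAndServe(":8080", nil)
}
```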
#gRegorLoveThat's what aaronpk does, though he might find a simpler example: <a class="u-author" href="/"></a>
#Saltbear, sec, will reup and fix the author issue, see if it does parse
#aaronpkyeah i do that but my h-card is down at the bottom of my site
#SaltgRegorLove, just adding u-author to my header link
#aaronpkthe trick is you'll need to make sure there's an "author" property *inside* the h-entry, but that only needs to link to your home page which is where your h-card is
#Saltwolftune, I will be moving things off of AWS eventually, part of why I don't have analytics is no consistent server to run piwik
#bearSalt and also, are you using s3cmd to upload to S3? if not then you may have to set the attributes on the file so that S3/CloudFront know how to return the content
#tantekI realized that's one of the antipatterns that drives people into semweb astronomy
#tantekaaronpk: this would actually be a good thing to test in webmention.rocks - that the requester explicitly does a HEAD request FIRST, and then two subtests: one where the HEAD has the link header and then the sender sends a webmention WITHOUT doing a GET, and another where the HEAD lacks the link header, and the sender does a GET for endpoint discovery.
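(A sketch of the sender-side discovery order tantek describes, assuming hypothetical URLs: HEAD the target first and look for a Link rel="webmention" header; only fall back to a GET and HTML parsing when the header is absent. The Link-header matching here is deliberately naive; a real sender needs full Link-header and HTML parsing.)

```go
package main

import (
	"fmt"
	"net/http"
	"regexp"
)

// Very rough Link-header match; real parsing must handle multiple values,
// quoting, and rel lists such as rel="webmention somethingelse".
var linkRe = regexp.MustCompile(`<([^>]+)>\s*;\s*rel="?[^"]*\bwebmention\b`)

func discoverEndpoint(target string) (string, error) {
	resp, err := http.Head(target)
	if err != nil {
		return "", err
	}
	resp.Body.Close()

	for _, l := range resp.Header.Values("Link") {
		if m := linkRe.FindStringSubmatch(l); m != nil {
			return m[1], nil // found in HEAD: no GET needed for discovery
		}
	}

	// No Link header: GET the target and look for <link>/<a> rel="webmention"
	// in the HTML (omitted in this sketch).
	return "", fmt.Errorf("no Link header; fall back to GET + HTML parsing")
}

func main() {
	ep, err := discoverEndpoint("https://example.com/target-post") // hypothetical target
	fmt.Println(ep, err)
}
```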
#tantekthe thing is, if the test suite has it, implementations will almost certainly all do it, because test suites in practice tend to drive implementation details MUCH more than any spec prose details
#tantek(where the test suite explicitly tested some features not documented in CSS1, everyone implemented them - interoperably! - and then the CSS1 spec had to be updated accordingly)
#tantek(and it took like a year for folks to discover the spec/test discrepancy)
j12t, wolftune and snarfed joined the channel
#calumryanIWC t-shirt came today - did have a customs declaration but seemed to have come from inside EU - Germany
#ben_thatmustbemesnarfed: i don't know if you saw, but at the last IWC @ MIT I made my site save posts made offline on my web client, but it doesn't cache the site
#tanteku-uid is a microformats2 property for representing a unique identifier for the item, e.g. an unchanging permalink, like a u-url but without any user-authored parts like a slug.
#tantekalso on your home page, since it's primarily a feed, you could add <body class="home h-feed">
#tantekscottgruber, interesting, in your RSS feed you have <title>Scott Gruber's Blog</title>, however no equivalent on your home page
#[scottgruber]I need to add canonical urls and fix up the header html, setting up my building blocks as part of my indieweb goals.
#tantekyou could add an <h1 class="p-name">Scott Gruber's Blog</h1> on your home page
#[scottgruber]i have a [blog](https://scottgruber.me/blog) that has the rss that I removed from my nav, but I left rss in head. Is it ok to have multiple rss feeds, one for articles, one for notes, and one for the blog (if I publish it)?
#tantek.comedited /like (+214) "/* IndieWeb Examples */ comment out sandeep and benwerd as their permalinks either 404 or lack like markup respectively" (view diff)
#snarfed(bridgy was actually backfeeding as early as 2012, just not via mf2/wm)
#ben_thatmustbemedarn... i was going to run my new client entirely off of github pages, but i can't login from JS because of CORS, and I can't host the php i need on github.io
#KartikPrabhutantek: I write articles in HTML and so had to go and update all older posts by hand when adding say "u-featured"
#aaronpknot sure why that one didn't import right, but in yaml, "yes" is an alias for "true", and that post had "true" stored for the rsvp value instead of the string value "yes"
#aaronpkso because true != "yes", my code didn't add the rsvp markup
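(A sketch of the pitfall aaronpk describes, assuming a YAML 1.1-style parser such as gopkg.in/yaml.v2: an unquoted `rsvp: yes` in front matter decodes to the boolean true rather than the string "yes", so a check like `rsvp == "yes"` fails. Normalizing both forms is one way to defend against imported data.)

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	var post map[string]interface{}
	yaml.Unmarshal([]byte("rsvp: yes\n"), &post) // hypothetical front matter

	rsvp := post["rsvp"]
	fmt.Printf("%T %v\n", rsvp, rsvp) // bool true, not the string "yes"

	// Normalize: treat boolean true as "yes" before emitting p-rsvp markup.
	var value string
	switch v := rsvp.(type) {
	case bool:
		if v {
			value = "yes"
		} else {
			value = "no"
		}
	case string:
		value = v
	}
	fmt.Println("rsvp value:", value)
}
```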
#tanteksees an advantage to "going to" in the start of the prose text of a reply meaning "rsvp:yes" ;)
#tantekand with 10+ publishers, and 5+ consumers, p-rsvp is so stable it would be harder to change it than not change it, so it just got promoted to core h-entry
#tantekwe have more different documented RSVPs than likes, reposts, or photos
#gRegorLoveMostly because I'm creating a log of events I've attended.
#tantek.comedited /posts (+26) "/* Kinds of Posts */ rsvps way ahead of everything but note/article/reply" (view diff)
#tantekI know GWG has not yet added himself to a lot of the IndieWeb Examples lists
#tantekbut he's been doing likes for a while (pretty sure)
#tantek!tell GWG what's your oldest "like" post, i.e. both semantically (like you meant it as a like, not just a bookmark), and with u-like-of markup? can you add it to https://indiewebcamp.com/like#IndieWeb_Examples
#tantekaaronpk: re: "how to consume an rsvp" not well defined, should it be on its own? or should it be just one part of "how to interpret a valid webmention" ?
#tantek^^^ help ? I fixed a few photos, requires manually looking up the person *somewhere* (e.g. their website, their twitter etc.) and putting in an image from there
#tantekdang we really should have a featured image for IWS
#tanteksince this is the last newsletter that will go out before the summit!!
#sknebelben_thatmustbeme: re your github issue, you can generate an oauth token in your github account and then use that from client-side JS... not that nice a login flow, but works