#[tantek]Yeah vocab validation is likely a waste of time and possibly going to cause you trouble when people start using / publishing new terms and or old terms in new ways
#[tantek]Vocabulary linting could be useful especially if you check for common (as in by evidence) errors
eli_oat and [cleverdevil] joined the channel
#[cleverdevil]As a validator for my own micropub endpoint, I think it still may be worth it.
#[cleverdevil]And people are free to start using new terms, it won't break validation 🙂
eli_oat joined the channel
#[cleverdevil]Old terms in new ways? I should likely greatly loosen my schema in that case.
snarfed joined the channel
#aaronpkOf course I forgot to run the schema update, sorry eli_oat
snarfed, tantek, [miklb], cweiske, [tantek] and jeremycherfas joined the channel
#loqi.meedited /via (+75) "/* Zegnat added "giving-credit" to "See Also" */ new section" (view diff)
#loqi.meedited /giving-credit (+193) "Zegnat added "[http://curatorscode.org/ The Curator’s Code] tried to get people to “attribute discovery” and standardise on hat-tip and via as terms. (With unicode symbols ↬ and ᔥ respectively.)" to "See Also"" (view diff)
#aaronpkdot-separators in the channel IDs could definitely work, clients could recognize that and group them based on that, and clients that don't understand it would just treat them as opaque strings still
#aaronpk1) so that my server is actually an archive of the things i've read, and 2) so that all the media is hosted at https URLs
#aaronpkcweiske: hm, previously I said that channel uids can be any URL-safe character, which means there aren't a lot of good options left for a separator
#aaronpkI was thinking that some server might want to use a URL as the channel ID, but maybe that isn't necessary?
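A minimal sketch of the grouping idea aaronpk describes: a client splits channel uids on the first dot and groups them, while uids without a dot (or opaque/URL uids) simply form their own groups. The channel dicts and field names here are illustrative, loosely following a Microsub `channels` response.

```python
from collections import defaultdict

def group_channels(channels):
    """Group Microsub channels by a dot-separated uid prefix.

    `channels` is a list of dicts with "uid" and "name" keys.
    Uids without a dot become their own top-level group, so a
    server using opaque (or URL) uids is unaffected.
    """
    groups = defaultdict(list)
    for channel in channels:
        prefix, _, _ = channel["uid"].partition(".")
        groups[prefix].append(channel)
    return dict(groups)

channels = [
    {"uid": "social.twitter", "name": "Twitter"},
    {"uid": "social.facebook", "name": "Facebook"},
    {"uid": "notifications", "name": "Notifications"},
]
print(group_channels(channels))
```

A client that doesn't understand the convention still works: it just sees three opaque uid strings.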
#[kevinmarks]you can put preload=none on the <video> tags
eli_oat joined the channel
#[kevinmarks]hm, I'm not seeing video, but my bandwidth might be bad
#[cleverdevil]Thanks for the feedback [kevinmarks], I fixed that issue.
#[cleverdevil]Also, thanks for pointing out the test suite for microformats.
#[cleverdevil]I wrote up a quick script last night that walks through and runs all of the samples through the validator, and its results are currently:
#[cleverdevil]One challenge I did see that surfaced in the test suite for the way I've done the schema is when parts of the document are *multiple* types at the same time.
#aaronpkfeels kind of like markup generated for the sake of markup
#ZegnatAh. Hmm. I think I have seen some feature about using/extending multiple formats. If that allows you to mix the h-event and h-card schemes inside the schema you would be able to validate it.
#ZegnatBut I don’t think the JSON Schema is actually flexible enough for that ... so you might have to hardcode every possible combination :(
#[cleverdevil]Seems like it would be much smarter to have the h-card embedded in the h-event 😕
#ZegnatI am with aaronpk there. While it is valid mf2, it doesn’t make sense to me in the slightest.
#[cleverdevil](FWIW, I think I'm fine just straight up rejecting this sort of content...)
#[cleverdevil](Or, perhaps adding a case to accept it without vocabulary validation)
#ZegnatEspecially with mf2 not having vocabs, no way to know which properties someone wanted to go on the card and which on the event. Or maybe duplicate data? No clue how this is supposed to work out.
#ZegnatThat education example does pass against my schema btw. Which shows you why you might want vocab aware validation on your server, in case you want to stop stuff like that from being stored.
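One way to handle the multi-typed-item case without hardcoding every combination, sketched in plain Python rather than JSON Schema: validate an item against the union of the property sets of every type it claims. The vocab property lists here are hypothetical stand-ins, not the actual schema from cleverdevil's repo.

```python
# Hypothetical, abbreviated vocab property sets; a real validator
# would load these from its schema definitions.
VOCABS = {
    "h-event": {"name", "start", "end", "location", "url"},
    "h-card": {"name", "org", "url", "photo"},
}

def allowed_properties(item):
    """Union the allowed properties of every type an mf2 item claims."""
    allowed = set()
    for t in item.get("type", []):
        allowed |= VOCABS.get(t, set())
    return allowed

def validate_item(item):
    """Return property names that no claimed vocab permits."""
    allowed = allowed_properties(item)
    return [p for p in item.get("properties", {}) if p not in allowed]

item = {
    "type": ["h-event", "h-card"],
    "properties": {"name": ["IndieWebCamp"], "org": ["IndieWeb"], "rsvp": ["yes"]},
}
print(validate_item(item))  # ["rsvp"] is in neither vocab
```

This accepts the item as long as each property fits *some* claimed type, which matches Zegnat's point: you still can't tell which properties were meant for the card and which for the event.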
#LoqiA resumé or curriculum vitae (CV) is a document that represents a person's background and skills, commonly used to secure employment https://indieweb.org/resume
#ZegnatI think bear (ping) mentioned having a huge archive of HTML and parsed microformats? Maybe he has some examples to contribute to the tests repo?
#ZegnatIs there an easy way we can query e.g. h-resumes from the indie data snarfed crawled?
#bearI have 3+ years of html and the parsed mf2 for anyone listed in the people's page
#tantekbear, might be good to check for any lingering h-as-*
#tantekand see if we can track down if it's from software or what that needs updating
#bearmy grep for h-resume is chewing thru the files, i'll do h-as-* after
[pfefferle] joined the channel
#[pfefferle]ZenPress and sempress still using h-as
#bearbecause my files only have the home page - a single hit for h-resume: luxagraf.net
[eddie] joined the channel
#[eddie]!tell aaronpk: Before I disappeared the other day, I mentioned my webmention notifications are duplicating. Because of salmention, and because I literally just forward every webmention my webhook receives into the Micropub API for that Microsub channel. I guess what I probably need to do is append the source URL of any post received and sent to a file, and then check for that URL in the file and, if it’s found, don’t send it again in the
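The dedup approach eddie describes could look roughly like this: keep a plain-text file of source URLs already forwarded, and skip any URL seen before. The filename and the `forward` callable are placeholders for whatever actually posts to the Micropub endpoint.

```python
from pathlib import Path

SEEN = Path("forwarded-webmentions.txt")  # hypothetical bookkeeping file

def forward_once(source_url, forward):
    """Forward a webmention source URL at most once.

    Each new URL is appended to SEEN; URLs already recorded are
    skipped, which breaks the salmention duplication loop.
    `forward` is whatever callable posts to the Micropub API.
    Returns True if the URL was forwarded, False if skipped.
    """
    seen = set(SEEN.read_text().splitlines()) if SEEN.exists() else set()
    if source_url in seen:
        return False
    forward(source_url)
    with SEEN.open("a") as f:
        f.write(source_url + "\n")
    return True
```

A set keyed on source URL alone is the simplest policy; a real endpoint might instead key on (source, target) pairs so legitimate re-mentions of other posts still get through.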
#[cleverdevil]The only tests that fail now are ones that we agreed need to be removed or altered 🙂
#bearok, so according to my data, the following sites all use some form of h-as-* ['mlncn.withknown.com', 'acegiak.net', 'boffosocko.com', 'dym.cx', 'notizblog.org', 'david.shanske.com', 'www.tombruning.com', 'raretrack.uk', 'achangeiscoming.net', 'tombruning.com', 'kartikprabhu.com', 'unrelenting.technology', 'caseorganic.com', 'jeena.net', 'www.ashersilberman.com', 'www.marcus-povey.co.uk', 'marcus-povey.co.uk', 'glennjones.net', 'tantek.com', 'j4y.co',
#aaronpkdgold: yeah if you re-send them to the webmention.io endpoint it will store them for you again. of course the site has to still be online for the webmention to validate so any webmentions from sites that have gone down will not be transferred to webmention.io
#[cleverdevil]I need to find some good documentation for it before I take a crack.
#aaronpkdgold: actually if you are sending the webmentions yourself you can just send them to the webmention.io endpoint before changing your advertised endpoint
#LoqiIt looks like we don't have a page for "microformats validation" yet. Would you like to create it? (Or just say "microformats validation is ____", a sentence describing the term)
#LoqiIt looks like we don't have a page for "json schema" yet. Would you like to create it? (Or just say "json schema is ____", a sentence describing the term)
#[cleverdevil]My understanding from what I am reading is that `rel-urls` would primarily show up in MF2 JSON as the result of parsing HTML, not created by a Micropub client.
#[cleverdevil]microformats validation is the process of inspecting a document that is marked up with microformats2 to see if it complies with the specification for the format, and potentially with the known vocabularies... {{cleverdevil}} is working on an attempt to make this easier here https://github.com/cleverdevil/microformats2
#dgoldsknebel: I have one of the tokens - the one that's in settings
#dgoldbut schmarty's Morris docs have the following:
#dgoldWMIO_WEBHOOK_TOKEN - a secret string chosen by you that webmention.io will include in webmentions in order to verify that they came from webmention.io
#dgoldahhhhhh - I haven't received any webmentions yet.
#[cleverdevil]I suppose there should be a way for the Microsub server to ping the frontend/cms, and have it generate the HTML for the published content, and then redirect there.
#[eddie]I’ve had the Facebook timeline feed in a general “Facebook” channel for a while (so it’s already fetched a bunch of stuff). I added the same feed URL to a “Family Facebook” channel after adding my required keywords as my family members’ names and bingo! It’s literally just a feed of my family’s stuff
#aaronpkwait how does that work? it doesn't filter the names!
#aaronpkthe nice thing is I can play around with this in aperture without having it be part of the spec, and all the clients will still work just fine!
#[eddie]Yeah definitely, I think that’s a huge benefit of the separation created in Microsub.
#[eddie]The Facebook Atom feed puts the words “X shared X” when someone reposts something on Facebook. So on my overall Facebook feed, I just added a filter to exclude the word “Shared”. There might be some false positives, losing posts where someone actually uses the word “shared”, but it’s worth it since Facebook is really just for me to skim through occasionally
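The include/exclude keyword filtering eddie describes could be sketched like this. The entry shape and parameter names are assumptions; the point is the two-sided filter: with `include` set, at least one keyword must appear, and with `exclude` set, none may. As eddie notes, substring matching means false positives are possible.

```python
def keep_entry(entry, include=None, exclude=None):
    """Decide whether a feed entry passes a channel's keyword filters.

    `entry` is a dict with a "content" string. Matching is a
    case-insensitive substring test, so a post that merely
    contains an excluded word is dropped too.
    """
    text = entry.get("content", "").lower()
    if include and not any(k.lower() in text for k in include):
        return False
    if exclude and any(k.lower() in text for k in exclude):
        return False
    return True

entries = [
    {"content": "Alice shared a link"},
    {"content": "Alice posted a photo"},
    {"content": "Bob posted an update"},
]
family = [e for e in entries if keep_entry(e, include=["Alice"], exclude=["shared"])]
print(family)  # only Alice's non-"shared" post survives
```

Because this lives in the server (Aperture, in aaronpk's case) rather than the Microsub spec, any client reading the channel benefits without changes.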
#LoqiIt looks like we don't have a page for "collector.githubapp.com" yet. Would you like to create it? (Or just say "collector.githubapp.com is ____", a sentence describing the term)