[xavierroy], gRegorLove, [jgmac1106], [tantek], amz3, [snarfed], [schmarty], BenLubar, jeremycherfas, KartikPrabhu, vilhalmer, Kaja__, [fluffy], cweiske, dav and leg joined the channel
#@JohnGunson↩️ I'm sitting here thinking two things - one, that's a badassed Coasty jumping on a microsub like that. Two, why does he need camo? Especially green camo? (twitter.com/_/status/1149640761790492673)
demomo[m], [kimberlyhirsh], [KevinMarks], jjuran, [tantek] and [benatwork] joined the channel
#[jgmac1106]@TwitterSupport for any @ mention though @microdotblog came through
#[schmarty]it's not bad, per se, but it gets a lot of attention despite having very few (2? 3?) implementations in the wild and almost no documented user experience.
[tantek] joined the channel
#[schmarty]it often comes up in the context of webmentions and dealing with spam and abuse and i feel like it tends to stop the conversation.
#[schmarty]vouch may be a useful tool in preventing abuse but if so i think it will be only a part of a larger group of tools and strategies.
#[KevinMarks]It assumes the existence of blocklist and showlist functionality, which few of us have built yet.
#[tantek]right. is anyone here moderating their mentions / replies backfeed from Twitter (via Bridgy) ?
#klezI'd say it should be noted on the Webmention page where it's linked from (the Extensions section) that it's just *one* proposal, because right now it reads as *the* proposal.
#[tantek]so early on when Vouch was being developed, there was a lot of thought being put into it and we got some implementation momentum. unfortunately (for reasons nothing to do with indieweb), a couple of the early developers of implementations had other life stuff come up and development / interop stalled a bit since
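(For readers who haven't seen it: Vouch is just one extra form parameter on an ordinary webmention request; the vouch URL is a page on a third-party domain that links to the sender, and the receiver checks whether it trusts that domain. A minimal sender-side sketch in Python; the receiver-side trust check is the hard, mostly unimplemented part.)

```python
import requests  # pip install requests

def send_webmention(endpoint, source, target, vouch=None):
    """POST a webmention, optionally carrying a Vouch parameter.

    Per the Vouch proposal, `vouch` is a URL on a third-party site
    that links to `source`'s domain; the receiver accepts the mention
    if it already trusts (e.g. has previously linked to) that domain.
    """
    data = {"source": source, "target": target}
    if vouch:
        data["vouch"] = vouch
    return requests.post(endpoint, data=data, timeout=10)
```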
#[tantek]as KevinMarks pointed out, developing Vouch revealed that there were simpler things an implementation should do first, e.g. keeping track of who you have linked to and automatically adding them to an allowlist
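(That "track who you link to" idea is easy to prototype. A sketch, assuming your published posts are HTML files on disk; the directory layout here is hypothetical. Collect the domain of every outbound link, then let webmentions from those domains skip moderation.)

```python
import glob
from urllib.parse import urlparse

from bs4 import BeautifulSoup  # pip install beautifulsoup4

def build_allowlist(posts_dir="_site/posts"):
    """Collect the domains of every site your own posts link to.

    A webmention whose source domain appears here can bypass
    moderation, on the theory that sites you have chosen to link
    to are unlikely to be spamming you.
    """
    allowlist = set()
    for path in glob.glob(f"{posts_dir}/**/*.html", recursive=True):
        with open(path, encoding="utf-8") as f:
            soup = BeautifulSoup(f.read(), "html.parser")
        for a in soup.find_all("a", href=True):
            domain = urlparse(a["href"]).netloc.lower()
            if domain:
                allowlist.add(domain)
    return allowlist
```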
[kimberlyhirsh], gRegorLove and [snarfed] joined the channel
#[snarfed]tantek: i expect many wordpress users are moderating backfeed. i am. mostly automated w/akismet, but occasionally manually
#Loqialgorithmic feed (AKA algorithm-driven feed or just algorithm feed) is a more correct term for the "algorithmic timeline" lie, and an increasingly common feature on social media silos such as Instagram, Facebook, and Twitter, where they show only some posts from your followings, as well as show some posts only hours or days after they were posted, thus not in chronological order https://indieweb.org/algorithmic_feed
#[jgmac1106]Most WP and many Known webmentions are moderated through Akismet
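(For the curious, running a webmention through Akismet looks roughly like this. The comment-check endpoint and true/false response are Akismet's documented API; the mapping from a parsed webmention to Akismet's fields, and the shape of the `mention` dict, are assumptions.)

```python
import requests  # pip install requests

AKISMET_KEY = "your-akismet-key"   # placeholder
BLOG_URL = "https://example.com/"  # the site receiving mentions

def is_spam(mention):
    """Ask Akismet whether a parsed webmention looks like spam.

    `mention` is assumed to hold the author name/url and comment
    text pulled from the source page's microformats. Akismet
    replies with the literal string "true" or "false".
    """
    resp = requests.post(
        f"https://{AKISMET_KEY}.rest.akismet.com/1.1/comment-check",
        data={
            "blog": BLOG_URL,
            "user_ip": mention.get("ip", ""),
            "comment_type": "webmention",  # Akismet allows custom types
            "comment_author": mention.get("author_name", ""),
            "comment_author_url": mention.get("author_url", ""),
            "comment_content": mention.get("content", ""),
        },
        timeout=10,
    )
    return resp.text.strip() == "true"
```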
#omz13Are there any statistics on the signal-to-noise (spam/not-spam) ratio for webmentions? How much of a problem is it really?
#[KevinMarks]So far not much; bridged mentions from twitter can be trolling.
#jgmac1106[m]Think fewer than one can count on one hand. All hypothetical, but rather plan for moderation while the river is low than build walls for climate change after it's too late
#jackyright - outside of someone making a page and just sending webmentions to your page; I haven't seen anything
#jackyI wanted to determine this kind of info when I built my webmention service
#jackyto help provide suggestions for moderations (known to spam over the last month etc)
#omz13Good to hear. I'm soon (famous last words) about to deploy my own webmentions implementation... so far I've been concentrating on catching malformed requests rather than spam
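(The malformed-request checks are the cheap, synchronous part of receiving. A sketch of the spec-level validation; the domain constant and error strings are placeholders.)

```python
from urllib.parse import urlparse

MY_DOMAIN = "example.com"  # the only domain this endpoint accepts targets on

def validate_request(source, target):
    """Reject malformed webmention requests before fetching anything.

    Returns an error message, or None if the request is well-formed.
    These mirror the basic checks in the Webmention spec: both
    parameters present, both http(s) URLs, source != target, and
    target pointing at our own site.
    """
    if not source or not target:
        return "source and target are both required"
    s, t = urlparse(source), urlparse(target)
    if s.scheme not in ("http", "https") or t.scheme not in ("http", "https"):
        return "source and target must be http(s) URLs"
    if source == target:
        return "source and target must differ"
    if t.netloc != MY_DOMAIN:
        return "target is not a URL on this site"
    return None
```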
#GWGI need to work on improving moderation for backfed webmentions
#jgmac1106[m]Akismet does well. Gets some false positives but never misses spam... And I've never had a spam webmention comment... Some tweets from Bridgy, but they don't know they are spamming my site
#jackyGWG: you're saying like checking the poster from the backfed comment?
#omz13Talking of webmentions... when one is received and an attempt is made to get the source (to check for links), what is the best way of handling 5xx errors? (my idea at the moment is to retry a few times - in case it's a transient fault - before giving up and rejecting)
#omz13For transient errors (particularly 502/504) I was planning a retry at e^n, and give up after 24 hours... perhaps 24 hours is too generous?
#sknebelI feel like once you have the infrastructure for more than a few minutes (meaning you likely have some scheduling/storage mechanism), giving a generous time is probably not more effort on your side, so why not give it a lot of time?
#omz13True. I finally got my scheduler working this morning... so the effort is as simple (now) as just saying what the retry limit is before saying "I give up"
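(Put together, the retry policy being described might look like this. It is written synchronously for brevity; a real receiver would hand each delay to the scheduler omz13 just mentioned.)

```python
import math
import time

import requests  # pip install requests

def fetch_source(url, give_up_after=24 * 3600):
    """Fetch a webmention source, retrying transient 5xx errors.

    Each retry waits e^n seconds (n = 1, 2, 3, ...), and we stop once
    the total elapsed time would pass `give_up_after` (24 hours here,
    per the discussion above; tune to taste).
    """
    start = time.time()
    attempt = 0
    while True:
        attempt += 1
        try:
            resp = requests.get(url, timeout=30)
            if resp.status_code < 500:
                return resp  # success, or a permanent 4xx: stop retrying
        except requests.RequestException:
            pass  # network-level faults count as transient too
        delay = math.e ** attempt
        if time.time() - start + delay > give_up_after:
            return None  # give up and reject the webmention
        time.sleep(delay)
```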
#[KevinMarks]fair, this is a dev topic - I wonder about 2 things - first, mung the HN format into Microsub, so all Microsub apps are HN readers too
#[KevinMarks]secondly, look at those example apps and see what it would take to swap out the HN api for websub
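(As a rough illustration of the first idea: munging HN's public Firebase API into the jf2-shaped timeline response a Microsub client expects. The property choices below are a guess at the minimum a reader needs, not a spec.)

```python
from datetime import datetime, timezone

import requests  # pip install requests

HN = "https://hacker-news.firebaseio.com/v0"

def hn_timeline(limit=20):
    """Return HN top stories shaped like a Microsub timeline response."""
    ids = requests.get(f"{HN}/topstories.json", timeout=10).json()[:limit]
    items = []
    for story_id in ids:
        story = requests.get(f"{HN}/item/{story_id}.json", timeout=10).json()
        items.append({
            "type": "entry",
            "name": story.get("title"),
            # Ask HN posts have no external url; fall back to the HN page
            "url": story.get("url")
                   or f"https://news.ycombinator.com/item?id={story_id}",
            "published": datetime.fromtimestamp(
                story["time"], tz=timezone.utc
            ).isoformat(),
            "author": {"type": "card", "name": story.get("by")},
        })
    return {"items": items}
```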
[grantcodes] joined the channel
#[grantcodes][KevinMarks] I had the same idea, but you could do a Microsub & Micropub to Twitter API bridge. But then I thought it was dumb 😅
#[snarfed]omz13: i believe we've only ever seen *one* actual native spam webmention
#[KevinMarks]well, you already kind of have that - you can read twitter with granary, and post to it with silopub
#[snarfed]KevinMarks i think so! tried to find it on the wiki with a few searches but didn't. aaronpk talked about it a little at the time, maybe he could find and add it? it's history! we should document it!
#@tomcritchlow↩️ I am ashamed to admit I have not been able to get webmentions working :(
I like the concept but....
Disqus: ad-tech funded nightmare (but nice ux!)
Hypothesis: non-profit with poor UX but strong ideas
Webmentions: a commune in a forest? (twitter.com/_/status/1149784378941345792)
#@iChris↩️ That's probably a fair assessment. It feels like if the tech folks got behind something like webmentions, it could be cool - but I know it's a long road before that might happen. Hypothesis seems like a good middle ground. (twitter.com/_/status/1149786294702215173)
KartikPrabhu, jimpick[m], [fluffy] and [manton] joined the channel
#sknebel[snarfed]: e.g. https://webmasters.googleblog.com/2019/07/a-note-on-unsupported-rules-in-robotstxt.html "Disallow in robots.txt: Search engines can only index pages that they know about, so blocking the page from being crawled usually means its content won’t be indexed. While the search engine may also index a URL based on links from other pages, without seeing the content itself, we aim to make such pages less visible in the future."
#[fluffy][snarfed] Oh I agree that “good enough private” is good enough for most uses of privacy. There have just been things I’ve posted as unlisted where I’m still worried that someone will share the link with others without my knowledge/approval.
#[fluffy]And it’s hard to know when someone’s been doing it.
#[snarfed][fluffy] heh i think sknebel meant take robots.txt details here. unlisted/private is still probably ok for main channel
#sknebelYou are seeing this result because the page is blocked by a robots.txt file on your website. (robots.txt tells Google not to read your page). This tells Google not to read the page (which is how we generate a description), but it doesn't tell Google not to show the page in Search results.
#sknebelyes, but that doesn't get seen if it is in robots.txt, which is the kinda self-defeating trap
[asuh] joined the channel
#[snarfed]ah ok. so then we noindex and don't robots.txt?
#sknebelprobably? you could add another test post with that combo
#[snarfed]which somewhat addresses [fluffy]'s concern of robots.txt leaks. somewhat.
#sknebeland we check in a few weeks how far google has gotten from your public post and the wiki
#[fluffy]yeah meta robots and x-robots-tag are basically equivalent. I use the header in Publ because it’s easier than trying to put it into my templates.
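(To make the combination being settled on concrete: serve the page, with no robots.txt block so the crawler can actually see it, and mark it noindex. A sketch using Flask and the header form fluffy describes; the route and body are illustrative.)

```python
from flask import Flask, make_response  # pip install flask

app = Flask(__name__)

@app.route("/unlisted/<slug>")
def unlisted(slug):
    # Serve the page normally; do NOT also block it in robots.txt,
    # or the crawler never sees this header and may still list the
    # URL in results based on inbound links alone.
    resp = make_response(f"<h1>unlisted post: {slug}</h1>")  # stand-in body
    # Header equivalent of <meta name="robots" content="noindex">
    resp.headers["X-Robots-Tag"] = "noindex"
    return resp
```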