#prologic 2020-07-21T00:20:42Z [Discovering new twtxt users User-Agent data](https://github.com/prologic/twtxt/issues/14) is something I want to discuss with the wider community. The technical impl details are easy, but I'm not sure how to present the data on [twtxt](https://twtxt.net). Thoughts?
#prologic 2020-07-21T00:22:33Z (re Discovering new twtxt users): Some ideas: 1) A new dedicated view/page 2) A "special" internal/builtin feed you can follow 3) Inject discovered users directly into the [/discover](https://twtxt.net/discover) view 4) Something else?
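For context, the original twtxt client documents a discovery convention where a fetcher's User-Agent advertises its own feed, e.g. `twtxt/1.2.3 (+https://example.com/twtxt.txt; @somebody)`. Below is a minimal sketch of parsing that header; the `Follower` type, function name, and regex are illustrative, not twtxt.net's actual code, and anything extracted this way is still unverified, self-reported data (as the discussion that follows points out).

```go
package main

import (
	"fmt"
	"regexp"
)

// Follower is a hypothetical record extracted from a User-Agent header.
type Follower struct {
	Client string
	URL    string
	Nick   string
}

// uaPattern matches the discovery convention used by the original twtxt
// client, e.g. "twtxt/1.2.3 (+https://example.com/twtxt.txt; @somebody)".
var uaPattern = regexp.MustCompile(`^(\S+)\s+\(\+(https?://\S+);\s*@(\S+)\)`)

// ParseUserAgent returns the follower advertised in a User-Agent header,
// or false if the header does not follow the convention. The result is
// untrusted input and still needs verification before it is displayed.
func ParseUserAgent(ua string) (Follower, bool) {
	m := uaPattern.FindStringSubmatch(ua)
	if m == nil {
		return Follower{}, false
	}
	return Follower{Client: m[1], URL: m[2], Nick: m[3]}, true
}

func main() {
	f, ok := ParseUserAgent("twtxt/1.2.3 (+https://example.com/twtxt.txt; @somebody)")
	fmt.Println(f, ok)
}
```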
#jacky is proof-of-delivery important? because someone can just throw an address in a user-agent (the problem with them today)
#prologic I put more (maintained) clients on that IndieWeb twtxt page :)
#prologic of the ones I know are still maintained and work
#Loqi [prologic] twtxt: 📕 a twtxt client in the form of a web application and hosted service that provides a self-hosted, decentralised micro-blogging platform. No ads, no tracking, your content!
#prologic I'll fix it (embarrassing) and redeploy shortly
#jacky will reply with thoughts on the GitHub issue
#jacky but tbh I think that discoverability, in general, is hard
#jacky and I also do think that placing it in the user agent will potentially allow for spoofing :(
[tantek] joined the channel
#[tantek] yup, can't depend on UA for anything like that
#aaronpk the trick is to stop thinking about the user agent header as something special, it's just unvalidated external input when it's received, just like anything else
#prologic And yes it's hard to solve for perfectly, but maybe we don't have to?
nickodd, gxt, cweiske and prologic joined the channel; nickodd left the channel
#Zegnat I think UA is as valid as any HTTP header, as aaronpk said, if you are on the receiving end you just need to treat it as any other random input.
#Zegnat Feed fetchers have been using UA both to identify themselves and to give information about how many individual subscribers a feed has on their end, and that has been working well from what I understand. Have not heard of anyone spoofing those.
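One hedged way to act on aaronpk's and Zegnat's point (treat the header as untrusted input) is to verify a claimed feed before surfacing it: fetch the advertised URL and only accept it if the response at least looks like a twtxt feed and references your own feed. The function name, heuristics, and size cap below are assumptions for illustration, not part of any spec.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// VerifyClaimedFeed treats a feed URL advertised in a User-Agent header as
// untrusted input: it fetches the URL and only accepts it if the response
// looks like a twtxt feed and mentions ourFeedURL somewhere in its text
// (a weak proof that the remote user actually follows us). The heuristics
// and the 64 KB cap are illustrative choices.
func VerifyClaimedFeed(claimedURL, ourFeedURL string) (bool, error) {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(claimedURL)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return false, fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	body, err := io.ReadAll(io.LimitReader(resp.Body, 64*1024))
	if err != nil {
		return false, err
	}
	text := string(body)

	// A twtxt feed is plain text with tab-separated "timestamp<TAB>text"
	// lines and "# ..." comments; both checks are rough heuristics.
	looksLikeFeed := strings.Contains(text, "\t") || strings.Contains(text, "# ")
	mentionsUs := strings.Contains(text, ourFeedURL)
	return looksLikeFeed && mentionsUs, nil
}

func main() {
	ok, err := VerifyClaimedFeed("https://example.com/twtxt.txt", "https://twtxt.net/user/prologic/twtxt.txt")
	fmt.Println(ok, err)
}
```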
swentel, gRegorLove, dckc and gxt joined the channel
#Ruxton the addition of GitHub profile READMEs is gonna allow even more rel="me" links :O
#[KevinMarks] maybe in the markdown library you're using?
[pfefferle] joined the channel
#Zegnat Now I am wondering, are there any unsafe rel values that absolutely must be sanitised?
#[KevinMarks] it depends on how you're using it - you do want to remove rel="me" and rel="canonical" etc if it's, say, someone else's comment embedded on your blog
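A minimal sketch of the kind of sanitisation [KevinMarks] describes, stripping rel="me" and rel="canonical" from embedded third-party HTML using golang.org/x/net/html; the helper name and the exact list of unsafe values are illustrative choices, not anyone's actual implementation.

```go
package main

import (
	"fmt"
	"os"
	"strings"

	"golang.org/x/net/html"
)

// unsafeRels lists rel values that should not leak from embedded
// third-party content (e.g. someone else's comment shown on your page),
// because consumers would otherwise attribute them to you.
var unsafeRels = map[string]bool{"me": true, "canonical": true}

// stripUnsafeRels walks a parsed HTML tree and removes unsafe tokens from
// every rel attribute, dropping the attribute entirely if nothing is left.
func stripUnsafeRels(n *html.Node) {
	if n.Type == html.ElementNode {
		attrs := n.Attr[:0]
		for _, a := range n.Attr {
			if strings.EqualFold(a.Key, "rel") {
				var kept []string
				for _, token := range strings.Fields(a.Val) {
					if !unsafeRels[strings.ToLower(token)] {
						kept = append(kept, token)
					}
				}
				if len(kept) == 0 {
					continue // drop the rel attribute altogether
				}
				a.Val = strings.Join(kept, " ")
			}
			attrs = append(attrs, a)
		}
		n.Attr = attrs
	}
	for c := n.FirstChild; c != nil; c = c.NextSibling {
		stripUnsafeRels(c)
	}
}

func main() {
	doc, err := html.Parse(strings.NewReader(
		`<p>Nice post! <a href="https://example.com" rel="me nofollow">me</a></p>`))
	if err != nil {
		panic(err)
	}
	stripUnsafeRels(doc)
	if err := html.Render(os.Stdout, doc); err != nil {
		panic(err)
	}
	fmt.Println()
}
```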
#prologic I'm not familiar enough with the rel attr, sorry
#prologic But yes it's configurable behaviour for sure
#prologic if you use twtxt.net or want to use the software itself and run it yourself
#prologic please by all means file issues or contribute via PRs :)
deltab, jeremych_ and [tantek] joined the channel
#[tantek] prologic, if you check your web access logs, you'll likely see that most "people" are bad actors, that is, bots lying with their user agent string. it's basically noise.
#aaronpk if everyone used well written software there would be no spam
#aaronpk as soon as you give people a mechanism that can be gamed people will take advantage of it
#aaronpk Mastodon stats are another example of this. There's a special URL a server can host that describes the instance, reporting things like number of users and number of posts. That gets aggregated on some websites to show total Mastodon users. But it's trivial to make a fake one and report 1,000,000,000 users if you wanted
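To illustrate aaronpk's point: any server can publish whatever numbers it likes on its instance-description endpoint, and an aggregator has no way to verify them. The endpoint path and field names below are only loosely modelled on Mastodon's instance API and are assumptions for illustration.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// instanceStats loosely mirrors the self-reported counters an instance
// exposes; the exact field names are an approximation for illustration.
type instanceStats struct {
	UserCount   int `json:"user_count"`
	StatusCount int `json:"status_count"`
	DomainCount int `json:"domain_count"`
}

func main() {
	// Nothing in the protocol lets an aggregator verify these numbers,
	// which is exactly why self-reported totals are easy to game.
	http.HandleFunc("/api/v1/instance", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]any{
			"title": "totally-real-instance.example",
			"stats": instanceStats{UserCount: 1_000_000_000, StatusCount: 42, DomainCount: 1},
		})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```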
#craftyphotons Fun side thing I'd like to do with my personal stuff is add WebAuthn to everything of mine for MFA so I can use my Yubikeys
#craftyphotons The Ruby world has a good reference WebAuthn gem out there it looks like, so it should be pretty easy to add to anything on Rails/Sinatra/etc
KartikPrabhu, nickodd and [tantek] joined the channel
#[tantek] craftyphotons++ for skepticism and defensive design thinking
#@0daysfordays Whenever something I've written ends up on the HackerNews front page, I get blasted with Webmentions from sites that seem to repost scraped content. Is that... profitable? Is there some cool SEO scam I'm missing out on? (twitter.com/_/status/1285609773686304768)
#jjuran I heard if you do that you get double the webmentions back
cjw6k joined the channel
#GWG_ I am thinking of writing a post on how people don't understand what IndieAuth is
#Loqi Static site generators are programs that take a set of flat text files on disk and transform them into a set of static HTML files ready to be served by a standard web server, or some variation of this example https://indieweb.org/static_site_generator
#[KevinMarks] Is that what you mean, or are you thinking more about a single page creator?
#kiero_ [KevinMarks]: it can be a static site generator but I'm looking for something with predefined templates
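As a companion to Loqi's definition above, here is a toy static site generator in the same spirit: walk a content directory, wrap each file in a layout template, and write plain HTML to an output directory. A real generator would parse Markdown and front matter; the directory names, template, and .txt extension here are illustrative assumptions.

```go
package main

import (
	"fmt"
	"html/template"
	"os"
	"path/filepath"
	"strings"
)

// page is the data handed to the (hypothetical) layout template.
type page struct {
	Title string
	Body  string
}

var layout = template.Must(template.New("layout").Parse(
	`<!DOCTYPE html><html><head><title>{{.Title}}</title></head><body><pre>{{.Body}}</pre></body></html>`))

// build walks srcDir, wraps every .txt file in the layout template, and
// writes a matching .html file into outDir.
func build(srcDir, outDir string) error {
	if err := os.MkdirAll(outDir, 0o755); err != nil {
		return err
	}
	return filepath.Walk(srcDir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || filepath.Ext(path) != ".txt" {
			return err
		}
		raw, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		name := strings.TrimSuffix(filepath.Base(path), ".txt") + ".html"
		out, err := os.Create(filepath.Join(outDir, name))
		if err != nil {
			return err
		}
		defer out.Close()
		return layout.Execute(out, page{Title: name, Body: string(raw)})
	})
}

func main() {
	if err := build("content", "public"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```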