wavis_, cjk101010, benward_, MrClaw, tommorris__ and wavis joined the channel
#aaronpkdoes bridgy include the canonical URL of a post if that person is a bridgy user?
#aaronpkas in, for one of my tweet replies which doesn't link back to my post, would bridgy's permalink for it include my aaronparecki.com URL for the post?
#aaronpkthe "currently don't have a base template" case is why the http header is supported too, so that you could return the header at the web server level
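The web-server-level header aaronpk describes might look like this in nginx (endpoint URL is a placeholder, not a real service path):

```nginx
# Serve static .html files but still advertise a webmention endpoint
# on every response, with no per-page template changes needed.
location / {
    add_header Link '<https://webmention.example/yourdomain/webmention>; rel="webmention"';
}
```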
#bengowebfinger and openid connect use it, which is where I've seen it.
#aaronpkthat's actually the first good reason I've heard for using .well-known
#beari'm curious how you do any header updates then if you have all .html files
ramsey, Leeky and jonnybarnes joined the channel
#bengoThough tbh, that may not be necessary. A domain with that constraint could just put a static /.well-known/webfinger document that always responds with { "links": [{ "rel": "webmention", "href": "webmention.io/whatever" }] }
myfreeweb joined the channel
#bengoWebmention clients that want to support the use case could reasonably try to find the webmention link relation via webfinger
#bengo"Find links from specific resource on domain" is basically what webfinger is for anyway
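The delegation bengo sketches could be tried out like this; the document and endpoint URL are placeholders, and the client helper is hypothetical, not part of any existing webmention library:

```python
import json
from typing import Optional

# Hypothetical static document served at /.well-known/webfinger,
# delegating webmentions for the whole domain to an external endpoint.
WEBFINGER_DOC = json.dumps({
    "links": [
        {"rel": "webmention", "href": "https://webmention.io/example.com/webmention"}
    ]
})

def webmention_endpoint_from_webfinger(doc: str) -> Optional[str]:
    """Return the first rel=webmention link from a webfinger JRD, if any."""
    for link in json.loads(doc).get("links", []):
        if link.get("rel") == "webmention":
            return link.get("href")
    return None

print(webmention_endpoint_from_webfinger(WEBFINGER_DOC))
```

A client supporting this use case would fall back to this lookup when neither a Link header nor an HTML link relation is found.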
ramsey, Leeky, jonnybarnes, tommorris__, zz_tridnguyen, uranther, myfreeweb, bigbluehat__, wavis, halorgium, dhalgren and KartikPrabhu joined the channel
#bengoFor the case where I'm just going to delegate all webmentions to another service, it's also nice to save the bandwidth of link headers on all requests, and just specify in well-known.
#bengoRelated question: Is it valid for me to only respond with that header on HEAD requests?
#tantek!tell snarfed great post re: keep Bridgy Publish simple. Makes a lot of sense and thanks for putting up with all the feature requests in github issues!
#voxpelliaaronpk: to simplify webmention implementation, could resolving target redirects perhaps be optional? so unless a site uses shortlinks itself it can skip resolving targets?
#voxpellihasn't really resolved targets in his endpoint and might perhaps still not be doing it
#aaronpkvoxpelli: I think that's reasonable. Trying to think of the cases where I am resolving redirects right now.
#voxpelliaaronpk: the webmention client has probably already resolved the redirect when looking up the endpoint :P so shouldn't really ever be necessary
#aaronpkwell that normally happens transparently by the http client
#voxpelliwell, never mind, of course it's necessary because one still needs to know what url to check for in the source
#voxpellione could wish for indieweb sites to never actually link to shortlinks in actual code but just accept shortlinks in the UI:s :P
#aaronpkeven if you don't have shortlinks for your own posts, you might change your URL structure at some point and have a bunch of URLs change
#aaronpkand you'd still want to be able to accept webmentions for the old URLs
#aaronpkassuming you're sending redirects for those, following your own redirects is a perfectly sane way of doing that
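The receiver-side resolution aaronpk describes can be sketched without any HTTP at all; here the site's own redirect table is a plain dict standing in for real 301s, with a hop cap so a redirect loop can't hang the receiver (all paths hypothetical):

```python
from typing import Optional

# Hypothetical redirect table: old permalink -> current permalink.
REDIRECTS = {
    "/p/abc": "/2012/old-post",       # a shortlink that chains
    "/2012/old-post": "/posts/old-post",  # URL structure changed later
}

def resolve_target(path: str, max_redirects: int = 5) -> Optional[str]:
    """Follow internal redirects until a canonical path, or give up."""
    for _ in range(max_redirects + 1):
        if path not in REDIRECTS:
            return path  # canonical: accept the webmention for this URL
        path = REDIRECTS[path]
    return None  # too many hops (possibly a loop): reject

print(resolve_target("/p/abc"))  # chains twice to /posts/old-post
```

This is the "following your own redirects" case only; whether to also chase third-party redirects like t.co is the separate question debated below.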
#voxpelliI think the matching between a target URL and an actual entity within one's system should be entirely up to the implementation then; one probably wouldn't follow HTTP redirects to solve that case
#aaronpk(I certainly do, because it's easier than any other way)
#snarfedvoxpelli aaronpk: agreed, whether/how to resolve wm target URLs is implementation specific, optional, and probably outside the spec...but there are definitely use cases for fetching and following HTTP redirects
#Loqisnarfed: tantek left you a message 11 hours, 59 minutes ago: great post re: keep Bridgy Publish simple. Makes a lot of sense and thanks for putting up with all the feature requests in github issues! http://indiewebcamp.com/irc/2015-11-30/line/1448871833483
#snarfedthe obvious one is shortlinks you didn't create yourself - t.co, dlvr.it, etc
#aaronpki guess the question is if resolving your own URLs is implementation specific, is there actually any reason to require a WM receiver to follow redirects of a target?
#aaronpkalso note that it's only in the "protocol summary" where it mentions following redirects, the actual spec content doesn't mention that at all
#voxpelliit adds unneeded complexity and is not something to be encouraged
#bearmost http get libraries won't even give you the redirect tree unless you ask for it - so doing a GET will just resolve any redirects
#aaronpkthis is for target, which the receiver isn't actually fetching
#voxpelliaaronpk: as the receiver should validate whether the receiving URL is acceptable, one can easily show an error if it fails (+ would be hard to actually give such an error synchronously if one follows redirects)
#voxpellisaw your comment now, lets leave it at that :)
#snarfedvoxpelli aaronpk: again, one clear use case for receivers following target redirects is when it's a shortlink someone else created, e.g. a t.co link
#kylewm.comedited /2015/SF/Guest_List (+301) "/* Participants */ re-merge Indie+wiki RSVPs and include rsvp_url in the attendee block; alpha-sort; update counts" (view diff)
#voxpellisnarfed: yeah, but should be left to implementer to decide whether it wants to support receiving such mentions or not
#aaronpksnarfed: do you accept those kind of target mentions?
#voxpelli(redirects are kind of a rabbit hole as well, if one wants to support redirects on eg. GitHub Pages one has to support meta-refresh – eg. Google does, but then you need to parse HTML as well :P )
#aaronpkI don't think people should be required/expected to accept t.co mentions. If someone is sending a webmention to a link they found via t.co, they can resolve the redirect themselves and send you the real target.
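The domain check aaronpk and voxpelli describe is small; hostnames here are placeholders for a site's registered domains:

```python
from urllib.parse import urlparse

# Hypothetical: the receiver's own domain plus its short domain.
ALLOWED_HOSTS = {"example.com", "exm.pl"}

def acceptable_target(target: str) -> bool:
    """Reject webmentions whose target isn't on a registered domain."""
    host = urlparse(target).hostname
    return host in ALLOWED_HOSTS

print(acceptable_target("https://example.com/2015/a-post"))  # True
print(acceptable_target("https://t.co/abc123"))              # False
```

Running this check synchronously at receipt time lets the endpoint return an error to the sender immediately, per voxpelli's point above.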
#voxpelli+1 on bear, I will also reject if the domain doesn't match a registered domain in my endpoint and I don't think I follow redirects (might have changed as part of my Salmention implementation as I then need to look up the target anyhow though)
bengo joined the channel
#ben_thatmustbemei have shortlinks they could be mentioning, plus my internal redirects are messy; easier to follow exactly where something points than try to deal with recalculating exactly what it points to
#aaronpkI'm now considering whitelisting my domain and my short domain
#aaronpkwebmention.io also follows redirects, and it leads to a lot of noise in the database. Now that you can register domains with it (to set up callback URLs and such), I'm going to consider only accepting webmentions for registered targets
#ben_thatmustbemei think the idea is to look at what is actually done in the social web right now, not just IWC; if you are drafting a spec, you have to consider that one of the largest silos does this, so it's important to consider others might want to as well
#voxpelliben_thatmustbeme: not just if they're doing it but also why they're doing it and how that may change etc
#kylewmI feel like KartikPrabhu added t.co following because someone (Doug Schepers?) wanted to webmention him from twitter
#voxpelliben_thatmustbeme: one possibility for Twitter would be to include the webmentioned links as link-tags or link-headers also btw
#aaronpkfollow-up question then, for everyone who is following target redirects, do you have a limit on the number of redirects you follow before giving up? (kylewm already said his limit is 1)
#kylewmi don't have any reason for it to be 1, probably just laziness
#bearI feel once you start following you should follow them all to be a good HTTP citizen
#aaronpkthere is no "all" since it may be an infinite redirect loop
#kylewmsnarfed++ bridgy pulling in mentions from twitter is so freaking cool
#bengoI just re-added to my twitter bio the other day, donno if mine just doesn't work because it's caching my old twitter info
#bearI use the python requests library and it has a max redirect limit - not sure what it is, but that's why it's in place
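For reference, bear's library (python-requests) caps redirects at 30 by default via `Session.max_redirects` and raises `TooManyRedirects` past that; a receiver can tighten it:

```python
import requests

# Tighten the default redirect cap (requests defaults to 30).
session = requests.Session()
session.max_redirects = 5

# session.get(target_url) would now follow at most 5 redirects
# before raising requests.TooManyRedirects.
```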
#snarfedre wm target redirects, i use the wordpress plugins blindly; i'm not sure what they do, but i expect they don't follow them...which is fine with me in practice.
#snarfed(i just mentioned t.co etc links as an example of why you *might* want to follow them, not that everyone should or that it shoudl go in the spec)
#bengoaaronpk: that sounds like it happens iff my site is www.bengo.is? which doesn't respond
#ben_thatmustbemei feel like if someone takes the time to send you a webmention for a URL that is not on your domain its probably something that will redirect to you
#bengoI guess one way of resolving my question is: Anyone indieauth with twitter recently?
#bengoHad this queued up from when I was hacking last night but no one was on: I'm pretty curious to hear a little bit more about how indiewebers are storing their data. e.g. do you have all your tweets in a SQL database? Do you go fetch some external API (or several) every time someone requests a page on your site? Do you memcache? Are your posts stored in a database server or on the filesystem? Folks can post here about
#bengothat, or maybe would make a good 15m open forum at iwcSF
#LoqiThe database antipattern is the use of a database for primary long-term storage of posts and other personal content (like on an indieweb site), and is an anti-pattern due to the additional maintenance costs, uninspectability, platform-dependence, and long-term fragility of databases and their storage files, as documented with specific examples below https://indiewebcamp.com/database-antipattern
#aaronpkplenty of implementation experience there :)
#bengoMakes sense. I am essentially static, i.e. content is in repo with server code. But online publishing (e.g. micropub) will require deviation from that
#bengoSo I'm particularly curious if there are micropubbers using filesystem only in a way that will scale to n+1 hosts, and how they've done it
#aaronpknot exactly, there are some examples of people writing a micropub endpoint that commits to the git repo and then publishes the static site
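A minimal sketch of the first half of that flow (all names hypothetical, not any particular person's endpoint): a micropub handler turns the posted properties into a front-matter file, which a separate step would then commit and rebuild:

```python
import datetime
import pathlib
import re

def micropub_to_file(props: dict, content_dir: pathlib.Path) -> pathlib.Path:
    """Write a micropub h-entry as a YAML-front-matter markdown file."""
    published = props.get("published", datetime.date.today().isoformat())
    slug = re.sub(r"[^a-z0-9]+", "-", props.get("name", "note").lower()).strip("-")
    path = content_dir / f"{published}-{slug}.md"
    front_matter = "\n".join(f"{k}: {v}" for k, v in props.items() if k != "content")
    path.write_text(f"---\n{front_matter}\n---\n\n{props.get('content', '')}\n")
    return path
```

After writing the file, such an endpoint would shell out to `git add`/`git commit` and trigger the static-site build, keeping the repo as the primary store.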
#bengoIf I'm going to use a database, I don't want to pick github.com, which I don't own.
#bengoIs using a corporate web service as a database really less of an anti-pattern than running a database server? I'd vote no.
#aaronpkusing github as a host isn't really any worse than using heroku or any other FTP service
#snarfedright. db vs filesystem (vs git, etc) is pretty orthogonal to which hosting provider you use, and at what level in the stack
mlncn joined the channel
#bengoTrue. If github.com just happens to be the GIT_URL you've configured and it's nothing github-specific, then awesome.
#snarfedfor indieweb specifically, the consensus has generally been that you definitely need to own and control your domain. below that, though, it's totally up to you how and where you host and serve your site
#snarfede.g. there are people on blogger and even tumblr who are fully indieweb and participate, even including indieauth and sending/receiving webmentions
#bengoYeah, I definitely wouldn't want to say anything normative that those aren't indiewebers. The best part is everyone gets to decide. I'm more curious about how others have chosen to do it
#snarfedi think we often miscommunicate it as "using a db for any webapp is bad," when we really just mean "using a db for your own personal site may be bad." very different things.
#bengoIf someone is currently supporting micropub publishing without using a networked database and can also scale past one rw filesystem, that's kinda what I'm curious about
#bengo(and doesn't outsource that user story to some other free service that is unlikely to be around in 5yrs)
#snarfedeh. honestly scaling isn't really a problem for any of our personal sites, db or fs or whatever they're backed with
johnstorey joined the channel
#snarfedaaronpk may be the one exception, and even then only because he includes location, etc
#snarfedsensors generate enough data that scaling actually matters. people, less so.
#aaronpki actually have a totally separate thing that stores all my location data. turns out a mysql/postgres database for that gets awkward fast, so now it's all in the filesystem
#snarfedand "outsourcing" and "unlikely to be around" aren't necessarily problems. if you own your domain, you can migrate elsewhere
#snarfedi'm resigned to/expect to switch host and/or CMS every ~5 years or so at this point
#aaronpkthe "database antipattern" page is essentially a list of issues people have had with using a database as the primary store of their website and trying to keep it around more than a few years
#aaronpki agree it's not the best name for the page :)
#LoqiMySQL is open source database software that is often used to store data in several indieweb CMS's like Known, and various other CMS's, e.g https://indiewebcamp.com/mysql
#voxpellibengo: I'm 100% static on my blog but have Micropub support through a third party service that does git voodoo magic
#gRegorLoveI believe there's "pro"-db experience on those pages.
#bengoI guess my long term interest is in architecting my indieweb stack such that I can truly trust it will continue working for many years. Yes that is a bit pedantic, but hey, it's a hobby.
#gRegorLoveCatching up on the webmention target conversation, I follow redirects when processing async, but on initial receiving I am checking the target is a URL on my domain, so it wouldn't accept t.co links. I hadn't thought about that before today.
#gRegorLoveI've been on MySQL over a decade and pretty pleased with it.
#bengo(storage size over time is an issue too, especially as self-sensors become more prevalent. See: ipfs and storj)
#voxpelliaaronpk: on the redirect count topic – I think I limited Bloglovin to 5, but should be implementation specific if one wants to follow both third-party and internal redirects as that might result in more than 5 in worst cases maybe
#snarfedbengo: will you also write your own OS? run a generator for power? become your own ISP? and get it to tier 1, so you can peer with the other backbone networks?
#bengoThinking is easier than doing. And to your point about practically being in control meaning you can deal with disruptions to external services: that's what I plan to do until I have my indiemark way higher and that other stuff actually matters.
#bengoDonno proper wiki page to link to wrt avoiding getting lost in the sky
#voxpellibengo: so basically I do progressive enhancement on my content – my core content is static and versioned through git, but webmentions and such that are just nice to haves are stored in DB and pulled in through JS or linked to on external sites
#bengoSo definitely not hating on filesystem folks (like me right now)
#voxpellibengo: looks like something that could fit with my Micropub approach pretty well ;) you would just need an alternative formatter that doesn't use YAML front matter
#Loqi2016-01-01-commitments are implementation and launch commitments publicly made by the IndieWeb community to ship on their personal sites by 2016-01-01 00:00 local time https://indiewebcamp.com/2016-01-01-commitments
#kylewmHoodie is an open source library for building web applications; it is intended to be fun and easy for frontend developers to build applications that plug into the Hoodie backend.
#tanteklast I checked hoodie.io it was a js;dr framework - anyone have any recent experience to the contrary?
#snarfedon an unrelated note...does anyone using the facebook API have experience with how it consolidates photo posts? e.g. into "X added 2 new photos." post objects
#aaronpkjs frameworks are fine for apps if they're not storing or serving content that way
#tantekmy limited anecdotal experience is it happens automatically (perhaps on the presentation side?) when sequential photo posts are published, less than 24 hours apart from each other
#aaronpkthe last couple weeks i've been using a JS IRC client! it stores its logs in plaintext files on my server though so that's nice!
#tantekas in I've seen it cluster photos like that, except when I post a photo sequentially *the next day*, it gets its own stream "item" in my "timeline"
#tantekinstead of being part of the previous "cluster" of "added to an album"
#tantekalso if I post a photo, then a note, then a photo, they are all distinct items
#tanteknot sure what happens if I do that, then delete the note, whether a subsequent reclustering occurs
#snarfedyeah, the user-facing behavior is somewhat understood, but only somewhat. how it's reflected in API objects and ids - the part i need - is even less understood. :/
#tantekyou may not need full understanding in order to solve the problem
#kevinmarksresponding to the earlier question, my webmention implementation doesn't follow target links initially to verify, but does to look for webmention endpoints to ping. I could change that though
#LoqiDoPA is an abbreviation for Denial of Productivity Attack, a method often used by trolls and non-implementers (perhaps without explicit maliciousness but rather misfocus) to slow down or prevent progress by misdirecting creator selfdogfooders into responding to hypothetical problems, instead of their own real world itches https://indiewebcamp.com/DoPA