#grantcodesOk infinite scroll deployed! Just need to enable it in each channel's settings. Settings are a bit broken, but it works well once that's done
#grantcodesaaronpk: does your site retry sending homepage mentions if they don't work? I've not set anything up yet for saving homepage mentions
tantek, [snarfed], [kevinmarks] and leg joined the channel
#aaronpkgrantcodes: no I don't have any retry logic yet. Tho any time I get a like or comment on that post it'll send the webmentions again!
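(For context, sending a webmention boils down to discovering the target's endpoint and POSTing a source/target pair. A minimal sketch of the protocol, assuming the endpoint is advertised in the HTTP Link header — not a reflection of aaronpk's actual code:)

```python
import requests
from urllib.parse import urljoin

def send_webmention(source, target):
    # Discover the endpoint from the HTTP Link header (a full implementation
    # would also look for <link>/<a rel="webmention"> in the target's HTML).
    resp = requests.get(target)
    rel = resp.links.get("webmention")
    if not rel:
        return None
    endpoint = urljoin(target, rel["url"])  # the endpoint may be a relative URL
    # Notify the target: a form-encoded POST with the source and target URLs.
    return requests.post(endpoint, data={"source": source, "target": target})
```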
eli_oat and [cleverdevil] joined the channel
#[cleverdevil]Nice work grantcodes! One thing: we should add a confirmation to channel deletion. It’s a bit too easy to accidentally tap 😀
#aaronpkAre you using the Micropub API for that channel?
#AngeloGladdinghey guys i'm trying to do something that seems like it requires a two-step webmention but i don't think that's right.. before i go any further down this path anyone care to give an opinion?
#AngeloGladdingwhat i'm wanting to do is get that last link presented on Bob's issue page
#AngeloGladdingwith a "*syndication*" property set
#AngeloGladdingso what i'm currently thinking is to have Alice send an additional webmention back to Bob's issue referencing the "canonical" issue page on Alice's site
#@dustywebIf you have a text based format you want humans to look at and occasionally write by hand and you don't permit comments, you're making a terrible mistake. JSON, I'm looking at you. (Markdown, I'm also looking at you for having ugly comments as an extension.) (twitter.com/_/status/973239603942608896)
#aaronpkthe "expected" column is my own expected result, not necessarily what the spec actually says right now, since i'm also not convinced the spec says the right thing yet
#dgoldi did not know that was there, thanks aaronpk
#aaronpkI made that last week when we were having a similar conversation
#aaronpkI think this is only a problem because I am limited to 1-second resolution for these timestamps
#aaronpkso entries have their own published timestamp, and then there is a separate timestamp of when they were added to a channel
#aaronpkwhen retrieving a page of a channel's timeline, I sort by the added timestamp rather than the entry's published date
#aaronpkthe idea is to treat it like a chat log, new stuff is always added to the end
#snarfed(ooh actually the 50ms cloudflare worker limit is only CPU, not wall clock time, so may work after all)
#aaronpkthe problem is when a bunch of entries get added to the channel all within the same 1-second interval, they all share the same timestamp, so then a <= comparison no longer works right
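(To illustrate the failure mode: with 1-second resolution, a whole batch of entries can share one timestamp, so a cursor of the form "everything after this timestamp" either skips or repeats the rest of the batch. A small sketch with made-up values:)

```python
from datetime import datetime

# Three entries added to a channel in the same fetch, all truncated to the same second.
added = [datetime(2018, 3, 13, 7, 53, 26)] * 3

# If the first page ends partway through that second, a timestamp-only cursor breaks:
cursor = added[0]
strictly_after = [t for t in added if t > cursor]   # [] -> the remaining entries are skipped
inclusive      = [t for t in added if t <= cursor]  # all three -> entries repeat on the next page
```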
eli_oat joined the channel
#snarfedagreed, second resolution seems insufficient
#aaronpkI tried disambiguating those by using the published date of the entry
#snarfedyou might also reconsider the "always append" semantics though
#snarfedeg my reader (newsblur) allows reversing order, which i care about, but i don't really care if new entries get interpolated or are always at the end
#aaronpkbut it turns out RSS feeds are low-res enough that when a bunch of entries are discovered at the same time, they often also share the exact same published date
#aaronpk(I do have a special case when you add a new feed, those entries get interpolated based on their published date so that it doesn't flood the channel with new posts)
#snarfedapart from subsecond resolution, could you preserve order for items that come from the same feed fetch?
#aaronpkthat would solve the problem of trying to use the published date to disambiguate because the published date is definitely less distinct than the other timestamp
#aaronpkWhat I'm still conceptually missing is how to write the SQL where clause to limit the results properly
#snarfedtry first defining your ordering tuple - ie something like (channel, timestamp, order within source feed) - and then bound that?
#snarfedi may not fully understand the result limiting need though
[kevinmarks] joined the channel
#[kevinmarks]that is always hard. You need to make the next/prev links pass in an absolute offset of some kind, so that adding new entries doesn't throw it off
#aaronpkright this is all based on absolute references rather than sql's limit/offset where the pages can change
#aaronpkthe absolute reference to an entry is the two timestamps, but i'm changing that to (timestamp,batch_order)
#aaronpkthe paging reference string used in the API is an encoded version of that
#aaronpkso a client will ask for entries after (timestamp,batch_order)
#aaronpkone sec, I can post a screenshot to make this easier to see
#aaronpkso let's say the first page ends at entry_id=5 in that example; the paging ID returned to the client will be encoded(2018-03-13 07:53:26, 1)
#aaronpknow I need to select from the database the next page that starts at (2018-03-13 07:53:26, 1)
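(One way to express that in SQL is a keyset comparison on the (added timestamp, batch_order) tuple. A sketch with hypothetical table and column names, not Aperture's actual schema or cursor encoding:)

```python
import base64
import sqlite3

def entries_after(db: sqlite3.Connection, channel_id, paging_id, limit=20):
    # Assume the paging ID is an opaque encoding of the cursor tuple,
    # e.g. base64("2018-03-13 07:53:26|1") -> (added_at, batch_order).
    added_at, batch_order = base64.urlsafe_b64decode(paging_id).decode().split("|")
    return db.execute(
        """SELECT * FROM entries
           WHERE channel_id = ?
             AND (added_at > ?                              -- a later second, or...
                  OR (added_at = ? AND batch_order > ?))    -- same second, later in the batch
           ORDER BY added_at, batch_order
           LIMIT ?""",
        (channel_id, added_at, added_at, int(batch_order), limit),
    ).fetchall()
```

(Databases that support row-value comparisons, e.g. recent SQLite, MySQL, or Postgres, let you write the bound directly as `(added_at, batch_order) > (?, ?)`.)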
#[kevinmarks]enough places get their TZ wrong that published time can be very skewed
eli_oat joined the channel
#[kevinmarks]this is kind of why ryan ended up building snowflake
#aaronpkthis would be simpler if I only did append-only
#aaronpksnowflake doesn't let you backdate things either
eli_oat joined the channel
#[kevinmarks]true, it was for a central notion of truth
#[kevinmarks]the other approach may be to have epochs based on published time, so you append old entries to those epochs (like tantek's BIMs), but then you need to cross epochal boundaries
#[kevinmarks]with technorati we had recency based shards, so when you searched for keywords it could work back through the recent ones first before going deep if it was a rarer word
#aaronpkthis bug only happens in very specific cases
AngeloGladding and eli_oat joined the channel
#MylesBraithwaite👋, I'm currently in the process of developing my own IndieWeb application. Would it be okay if I create a Wiki page in the IndieWeb site for my notes? Or is that only for completed projects?
#dgoldaaronpk++ for implementing fixes on an experimental thing
#Loqiaaronpk has 124 karma in this channel (1587 overall)
#aaronpkyour suggestion of running the cron by hand is good, i'll wait to see what he says to that
#mylesb.cacreated /User:Mylesb.ca/Amalfi (+1302) "Created page with "'''Amalfi''' is an IndieWeb application built using [[Python]] and [[Flask]] that is currently in development by {{Myles}}. People using it on their own site: * {{Myles}} |..."" (view diff)
#LoqiIt looks like we don't have a page for "DNS TXT record" yet. Would you like to create it? (Or just say "DNS TXT record is ____", a sentence describing the term)
#LoqiIt looks like we don't have a page for "DNS records" yet. Would you like to create it? (Or just say "DNS records is ____", a sentence describing the term)
#dgoldaaronpk: hitting some composer-lock issues with the latest build of aperture
[eddie] joined the channel
#[eddie]!tell swentel regarding tokens, etc.: it's not optimal, but how Indigenous currently handles it is that when you log out, it sends a token revocation request. Beyond that, it assumes tokens are valid, but will show an error if your Micropub request fails authentication. For now it's assumed that if you hit an authentication error, you'll log out and back in to Indigenous manually
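(The revocation request itself is small: per the IndieAuth spec, the client POSTs action=revoke plus the token back to the token endpoint that issued it. A minimal sketch, not Indigenous's actual code:)

```python
import requests

def revoke_token(token_endpoint, token):
    # IndieAuth token revocation: form-encoded POST to the issuing token endpoint.
    # The endpoint typically responds with HTTP 200 whether or not the token was still valid.
    return requests.post(token_endpoint, data={"action": "revoke", "token": token})
```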
#dgold"Warning: The lock file is not up to date with the latest changes in composer.json. You may be getting outdated dependencies. Run update to update them."
#aaronpkWhoa really? What sort of sources? That should only have happened in very specific cases like when a large batch of entries was suddenly found and they had drastically different published dates
#aaronpkI’ve only had one instance of that bug in all my feeds
#dgoldit was when I started using aperture - don't worry, I saw them elsewhere
#dgoldbut, for the record, 100-odd posts in one channel, 30 in the other
#Loqi[Peter Stuifzand] I have been building a microsub server. It's not perfect, but it works. It works with Monocle. The code is open source and can be found on Github here: https://github.com/pstuifzand/microsub-server/
But sadly it seems I can't use my own authorizatio...
#ZegnatKartikPrabhu, JSON Schema (like an XML schema) can be used to check the validity of a document. In this case the JSON that you get from parsing microformats from HTML.
#ZegnatKartikPrabhu when you are accepting random JSON from a possibly untrusted source (e.g. any Micropub client), it is good practice to filter such input and make sure what you are getting is what you expect. Like any other outside input.
#[cleverdevil]And it generates the longer JSON schema output.
#[cleverdevil]The best thing for it would be to have a massive test suite of valid JSON data to validate.
#ZegnatNot sure if you gain anything by specifying every mentioned property though. I guess it is nice because you get to force the uri format on some of them...
#ZegnatI think I made a remote rsvp once; that’s not going to get past your schema for one, [cleverdevil], as you only accept 4 fixed strings for rsvp.
#ZegnatYou can, KartikPrabhu. But if the parsed value in the JSON object isn’t a URI, you will not pass [cleverdevil]’s validation
#ZegnatWhich may be correct, if his server expects URLs for the photo property.
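(For reference, a schema fragment in the spirit of what's being discussed — not [cleverdevil]'s or Zegnat's actual schema — showing both the uri format on photo and the fixed rsvp values that tripped up the remote RSVP above:)

```python
from jsonschema import validate  # pip install jsonschema

h_entry_fragment = {
    "type": "object",
    "properties": {
        "type": {"type": "array", "contains": {"const": "h-entry"}},
        "properties": {
            "type": "object",
            "properties": {
                # photo must be a list of URI strings
                # (note: "format": "uri" is only enforced if a FormatChecker is supplied)
                "photo": {"type": "array", "items": {"type": "string", "format": "uri"}},
                # rsvp is limited to the four microformats vocabulary values
                "rsvp": {"type": "array", "items": {"enum": ["yes", "no", "maybe", "interested"]}},
            },
        },
    },
}

entry = {"type": ["h-entry"], "properties": {"rsvp": ["yes"]}}
validate(entry, h_entry_fragment)  # raises ValidationError on non-conforming input
```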
#[cleverdevil]Again, this was just my first pass based upon my *interpretation* of the documentation.
#aaronpkI think it's important to keep in mind the context in which the validation is being used
#ZegnatThere is a bit of a conflict between validating generic mf2 objects (which is what my schema tries to do) and validating for an actual usecase (e.g. accepting blobs for Micropub).
#aaronpkright, the difference between mf2 json syntax validation vs vocabulary validation
snarfed joined the channel
#aaronpkboth of which are important but for different use cases
#ZegnatI can definitely see why you would want a collection of URLs for the photo property on a micropub server. And from all of the h-entry documentation we have, that’s what [cleverdevil] is specifically validating: URLs.
#bearI have 19G of website html and the parsed mf2 output if you want :)
#aaronpkotherwise you could get into some trouble later, when you go to access the data, if you've stored JSON that isn't actually mf2 JSON
#aaronpki'm a fan of treating the vocabularies in a much looser fashion, accepting anything and rendering what you can
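(The looser approach can be as simple as a tolerant accessor that renders whatever shape shows up. A sketch of the idea, not Monocle's actual template code:)

```python
def prop(entry, name, default=None):
    # mf2 JSON property values are lists, but accept a bare value or nothing at all
    # and just render what's there.
    value = entry.get("properties", {}).get(name, default)
    if isinstance(value, list):
        return value[0] if value else default
    return value

prop({"properties": {"name": ["Hello world"]}}, "name")  # -> "Hello world"
prop({"properties": {}}, "photo", default="")            # -> ""
```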
#ZegnatActually, I think I found an mf2 example in the micropub spec that didn’t validate against my generic schema... Oh well. Another issue for tomorrow! Nighty-night.
#[cleverdevil]Fair enough. But, that makes the CMS side *much* more complex.
#aaronpkbut certainly there could be use cases where you'd want to strictly validate the vocabularies too. I just think they are totally different concerns
sebsel joined the channel
#aaronpkalso I wouldn't say "much" more complex... I basically did all that logic in the Monocle templates and they aren't huge
#[cleverdevil]If anything, I'd prefer to validate in a rigorous way, and then normalize/fix before storing, if I want to accept it.
#[cleverdevil]Sure, but you're using a bunch of tools that you created 🙂