#[eddie]!tell aaronpk: I got a Micropub post to Aperture to work except the content. I did a content attribute inside of properties, with an array of objects. It contained an html and a text attribute which both contained a string. Aperture is returning all my properties except the content attribute.
#Loqiaaronpk: [eddie] left you a message 9 minutes ago: I got a Micropub post to Aperture to work except the content. I did a content attribute inside of properties, with an array of objects. It contained an html and a text attribute which both contained a string. Aperture is returning all my properties except the content attribute.
#Loqiaaronpk: [eddie] left you a message 9 minutes ago: I’m assuming I did something wrong? I sent it via JSON
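The JSON shape is likely the issue here: in the Micropub spec's JSON syntax, each item in the `content` array is either a plain string or an object carrying an `html` key, and some consumers only recognize one of those forms. A minimal sketch of a create payload (the property values are illustrative, not eddie's actual post):

```python
import json

# Hedged sketch of a Micropub JSON create request body.
# Per the Micropub spec, a "content" item is either a plain string
# or an object with an "html" key; a "text" key may sit alongside
# "html", but not every server looks at both.
payload = {
    "type": ["h-entry"],
    "properties": {
        "content": [{
            "html": "<p>Hello <b>world</b></p>",  # illustrative content
            "text": "Hello world",
        }],
    },
}

body = json.dumps(payload)  # send with Content-Type: application/json
```

If the receiving server ignores the object form, falling back to `"content": ["Hello world"]` is a quick way to isolate the problem.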
jjuran, jeremycherfas, [gerwitz] and [kevinmarks] joined the channel
#@TeamWembassy@stevepurkiss yea i get it i think it serves a good purpose. my goal is to have something super simple and specific use case, kind of a drop-in and go sort of deal. I probably will lean on them for things and give them some of what im doing with Webmentions as well. (twitter.com/_/status/962995099675209729)
[kevinmarks], [tantek] and tantek joined the channel
#ZegnatIt’ll be interesting to see the reaction of the public. The reaction of commercial websites might be to get rid of the warning ASAP, which is probably exactly what Google wants
#tantekDoes anyone else here omit replies from the Atom/RSS feeds?
#tantek(this would basically drop all "responses", likes, reposts, etc. from side files)
#tantekbecause AFAIK, no "old" feed readers do anything intelligent with those anyway
#tantekso presumably they show up as noise for the folks using such software
#sknebelI know many hide them from the homepage feed by default, not sure if anyone treats atom differently (well, many people use granary, so those likely don't)
#sknebeland if your post has even a minimal context in textual format ("I like...", "Reply to") I don't think they are useless in a traditional reader
#tantekthe "Reply to" context is only on the permalink display
#tantekI do put "likes ... " in the fallback summary
#tantekuntil someone shows me a feed reader that *does not* support h-feed, AND yet somehow supports doing something useful with like/reply posts, I'm not going to worry about it
#tantekgoing to try deploying it and see if anyone notices
#tantekthis is part of my work toward implementing issues / replies to issues
#tantekI've decided issues (like github issues) are too out of context to bother showing in feed files
#tantekand since issues have names and content, I'm building them out of article posts that are in reply to a particular issues list
#tantekI'm also omitting them from my "articles box" on my home page
eli_oat, kapowaz, tantek and [gerwitz] joined the channel
#[gerwitz]+1 FWIW … replies and reactions require the context of a content resource. I don’t like elevating them to be a peer to original posts. (Ev Williams is the smartest-on-this person I know who disagrees.)
jeremych_, jeremycherfas, [kevinmarks], AngeloGladding and barpthewire joined the channel
#aaronpk[eddie] I just pushed up a bunch of changes for read state tracking! I'm excited to try it out! let me know what you think of the api
#tantekhmm except the src needs a trackID whereas most soundcloud links have a human friendly path
#Loqi[Aaron Parecki] Experimenting with auto-embedding content
snarfed joined the channel
#schmartytantek: i used to autoembed soundcloud links. generating an embed iframe from a track's browser-readable URL required an extra processing step to get the trackID, as you noted.
#KartikPrabhutantek: I think I do something similar for youtube and vimeo and Flickr
#KartikPrabhuextract some sort of ID from the URL and then embed it
#schmartyoops. by processing step i meant fetching the page to find the trackID embedded somewhere in it.
#KartikPrabhuschmarty: oh! the URL itself does not give you that?
#aaronpktheir oembed API will give you the info you need
#KartikPrabhu<sigh> if something is in the API it can be on the page, no?
#schmartyKartikPrabhu, aaronpk: i hadn't thought to try oembed, so i fetched the page and looked for a proprietary meta tag containing a soundcloud: url
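As aaronpk notes, the oEmbed endpoint avoids scraping for the trackID entirely. A sketch of building the request URL, assuming SoundCloud's documented oEmbed endpoint at `https://soundcloud.com/oembed` (the track URL below is hypothetical):

```python
from urllib.parse import urlencode

def soundcloud_oembed_url(track_url: str) -> str:
    """Build an oEmbed request URL for a SoundCloud track page.

    The JSON response includes an embeddable iframe in its "html"
    field, so the numeric trackID never has to be extracted from
    the page by hand.
    """
    query = urlencode({"format": "json", "url": track_url})
    return "https://soundcloud.com/oembed?" + query

# Hypothetical track URL, just to show the shape of the request:
url = soundcloud_oembed_url("https://soundcloud.com/artist/some-track")
```

Fetching `url` (e.g. with `urllib.request.urlopen`) and reading the `html` key of the JSON response gives the embed markup directly.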
#snarfed[cleverdevil]: re AWS and Athena (moving here...)
#snarfedAWS definitely has an equivalent competitor in GCP. migrating either direction is totally doable, esp if you're using basic services like EC2, S3, etc
#[cleverdevil]Ah, cool, I thought here might be more appropriate 😉
#[cleverdevil]The appeal of having all of my data on S3, which is super redundant, reliable, and distributed is pretty enormous.
#snarfedhonestly for a personal web site, you can probably get by with just filesystem dates, simple linear search, etc, and maybe google site search or aws's hosted elastic for site search if you really need it
#aaronpkI wonder if I could use that for indexing the data in compass
#[cleverdevil]But, since MongoDB came first in Known's history, the MySQL database is basically a bunch of tables with indexed columns that also have the JSON document embedded.
#snarfedhuh ok. maybe start with a simple flat file backend then, so it's usable both on and off S3
#[cleverdevil]Mostly just for saving and retrieving by ID.
#aaronpk[eddie] if you add a syndication link from your indie post to your github post, i'll be able to reply to your indie post and have my reply also syndicated to github :)
#[eddie]aaronpk: I definitely want to get syndication links added, but that will probably happen once I update how my site syndication works.
#[eddie]Right now I syndicate by posting to my site, updating jekyll and then manually sending a Webmention to bridgy publish through Telegraph. Which means until I get automated webmention/syndication logic in my site, I would have to copy the url returned on telegraph, open the post on my server and manually paste the syndication link in.
#schmartyi've been using tt-rss + woodwind for a bit and am ready to move past them. this flurry of new microsub stuff makes me feel like it'd work out for me okay. :}
#tantekbell could work (that's what FB uses now instead of the globe!)
#aaronpkaha that's probably why it was in the back of my mind
#[eddie]Hopefully this code signing stuff is an easy fix with Apple so I can push out my bug fix release on Indigenous and start on support for some of the new Microsub read/unread stuff! Not knowing the state of stuff in my channels has been one of the biggest things messing with my Microsub use
#aaronpkyeah I am looking forward to that for sure!
#aaronpkI think next on my list is getting source info into entries in case there is no author info
#tantekok added more specifics about notification iconography in FB and IG for now
#schmartyaaronpk: thanks! i don't think i'm ready to start work on a client yet. i'm interested in getting my hands dirty setting this stuff up to learn the state of things.
#[jjdelc]I got a question about bridgy and my identity. My identity auth site is www.domain but my blog and posts live under jj.domain so it looks like bridgy is never finding my new posts when scanning www. It is assuming that my posts will be on my identity page. or am I missing something? Only when I tell bridgy to crawl a specific post it can then post the mentions to it.
#[jjdelc]both my identity page and blog index point to each other with rel=me, but there's nothing special about the link in my identity page to tell "here be the posts" and not any other outbound rel=me link
leg, [kevinmarks] and snarfed joined the channel
#KartikPrabhu[jjdelc]: I think you can use rel-feed to point to the page that has a feed of posts
#snarfedi recommend adding it to your profile (and re-authing bridgy) anyway. it behaves subtly differently when it believes a web site is actually yours vs not
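The rel-feed suggestion amounts to the identity page advertising where the posts live. A minimal sketch of how a consumer could discover it (the markup and domains are hypothetical, and a real consumer must also resolve relative URLs):

```python
from html.parser import HTMLParser

class RelLinkFinder(HTMLParser):
    """Collect rel values -> hrefs from <link> and <a> tags.

    Sketch of how a consumer like Bridgy can find the posts feed:
    the identity page points at it with rel="feed" alongside rel="me".
    """
    def __init__(self):
        super().__init__()
        self.rels = {}

    def handle_starttag(self, tag, attrs):
        if tag not in ("link", "a"):
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        for rel in (attrs.get("rel") or "").split():
            self.rels.setdefault(rel, []).append(href)

# Hypothetical identity page pointing at the blog's feed:
html = (
    '<link rel="me" href="https://jj.example/">'
    '<link rel="feed" href="https://jj.example/posts">'
)
finder = RelLinkFinder()
finder.feed(html)
# finder.rels now maps "feed" to the posts page
```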
#[jjdelc]I have another question about the inclusion of webmentions in an entry. Should the list of webmentions be inside the h-entry? Because when I parse the links in an h-entry I will find all the webmention links again, but my entry isn't really pointing to them. I've seen some implementations where the list of mentions is inside the h-entry.
#[jjdelc]I think the links I parse in my posts should be inside my e-content to get around it
#KartikPrabhu[jjdelc]: I have mine inside the h-entry marked up as p-comment. But when I send my webmentions I only use e-content
#[jjdelc]Ah so many pages to read, I haven't gotten through that one yet
#KartikPrabhu[jjdelc]: it is still ok to ask here since others might know which page to read :P
#[jjdelc]@KartikPrabhu thanks, do you include your "u-bookmark-of", "u-like-of" links inside your e-content as well?
#KartikPrabhuoh! good point. I don't so I wonder what I'm doing for webmention sending :P
#[jjdelc]Right now I have those outside my e-content because it is not content I've written (not my body), but a different attribute of the post, still inside h-entry but outside e-content
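The convention described here (webmention targets drawn only from e-content, with comment lists elsewhere in the h-entry) can be sketched as a parser that scopes link collection to the e-content element. A simplified illustration with hypothetical markup; it only tracks `<div>` containers, and a real sender would also include explicit properties like u-in-reply-to:

```python
from html.parser import HTMLParser

class EContentLinks(HTMLParser):
    """Collect hrefs that appear inside an h-entry's e-content.

    Simplification: only <div> elements are tracked as containers,
    so e-content marked up on other tags would need extra handling.
    """
    def __init__(self):
        super().__init__()
        self.stack = []   # True for e-content divs, False for other divs
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        classes = (attrs.get("class") or "").split()
        if tag == "div":
            self.stack.append("e-content" in classes)
        if any(self.stack) and tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])

    def handle_endtag(self, tag):
        if tag == "div" and self.stack:
            self.stack.pop()

# Hypothetical h-entry: one link in e-content, one in a comment list.
html = (
    '<article class="h-entry">'
    '<div class="e-content"><a href="https://example.com/post">ref</a></div>'
    '<div class="p-comment"><a href="https://other.example/c1">comment</a></div>'
    '</article>'
)
parser = EContentLinks()
parser.feed(html)
# parser.links contains only the e-content link
```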
#aaronpkoh yeah, this is not just my site, it's also the webmention.io avatars. maybe someone with a lot of traffic recently started showing webmentions from there
#sknebelalways surprised how expensive AWS bandwidth is
#snarfedhuffduff-video has been serving multiple TB per month recently. :( i fixed robots.txt and blocked one bot recently, helped so far, but still.
#aaronpksknebel: yeah i'm gonna wait til the end of this billing cycle and see if january was just an outlier. but if this keeps up then i'll probably move them to a linode
#snarfedaaronpk: do you serve the files? or does S3?
#aaronpkspeaking of things that are barely worth the trouble, I want to delete everything from aws glacier that I was experimenting with for a while. but you can only delete the vault if it's empty, and there is no UI to empty it.
#aaronpkthis is costing me $0.30/month but it's a little irritating
#sknebelit's probably a robot doing it, but something like that (is it public what glacier is based on? I remember there being tons of speculation at some point, but not sure if it ever was clarified)
#tanteksknebel, it could be a robot literally fetching a hard drive