npdoty: you could add a bit of JavaScript to your page so that when you referred to a place you included a geo URI, with a little click handler so that the user loaded an HTTP page on OpenStreetMap or whatever
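A small illustrative handler along those lines, as a sketch assuming geo: links in the page markup (OpenStreetMap does accept marker coordinates via mlat/mlon query parameters):

```typescript
// Hypothetical sketch: intercept clicks on geo: links (e.g. geo:48.86,2.35)
// and open the same coordinates on OpenStreetMap over HTTP instead.
document.addEventListener("click", (event) => {
  const anchor = (event.target as Element).closest?.('a[href^="geo:"]');
  if (!(anchor instanceof HTMLAnchorElement)) return;
  const match = anchor.href.match(/^geo:(-?[\d.]+),(-?[\d.]+)/);
  if (!match) return;
  event.preventDefault();
  const [, lat, lon] = match;
  // OpenStreetMap accepts marker coordinates via mlat/mlon query params.
  window.open(`https://www.openstreetmap.org/?mlat=${lat}&mlon=${lon}`);
});
```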
tantek: bret - if you're trying to figure out a good UX for some data, look at how other services do UX for that kind of data, document it, note the shortcomings in their interactions (e.g. Twitter showing "Paris, Paris"), and do something similar but better.
tantek: for just general unpleasantness or unproductive conversation, we have a community that applies the Socratic and scientific methods, asks questions like "how does this apply to your personal site?", and goes from there
Loqi: Instagram is a popular image hosting silo most well known for square photos that have been processed with an image filter http://indiewebcamp.com/IG
tantek: KevinMarks: not really interested in yet another ID proposal - always filter with the question: are you using it on your own site? or when do you plan to?
kylewm"Every user floats by themselves, interacting with who they please. This denies us the ability to build communities, to set social norms, and to enforce them."
tantek: that's the closest we have to any kind of identity issue in this channel, and they're not really sock puppets, as they're similar enough to imply the same identity
KevinMarks__: Eastgate's proposal bothered me because it was so obviously elitist and naïve, but then it reminded me of requiring your own domain (which does need money, a name and address, etc.)
kylewm: the hand-waviness bothered me more ... holding identity in escrow, to be released if the person does something bad. who is doing the escrow? who decides what constitutes "bad"?
pdurbin: kylewm: it's a double-edged sword. you don't own your comments on Google+ (the ones you make on other people's posts). this is different from Twitter
pdurbin: it looks like the comments at https://aaronparecki.com/notes/2014/11/01/1/ for example were collected from Twitter, but I assume aaronpk could delete them if he wanted, since they are stored on his site somehow
KevinMarks__: Right. The stuff documented in the post linked from Eastgate is shocking and distributed: strategic deletions, all kinds of organised harassment
bret: kylewm: I updated the readme for base if you are curious https://github.com/bcomnes/base - I added a general outline of what is done and what is coming
mlncn, KartikPrabhu, alexhartley, j12t, EOGreer, thedod_, npdoty, Erkan_Yilmaz and friedcell joined the channel
TheNewYorker: everything in the archives section will get fed into a PHP script that extracts the date component and uses it to select which version of the target resource to serve
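A minimal sketch of that date-based selection, assuming archive URLs like /archives/2014-11-01/notes/hello and an in-memory revision history (all names and shapes here are assumptions, not TheNewYorker's actual code):

```typescript
// Hypothetical sketch: pick the newest stored revision at or before the
// date embedded in an archive URL. `revisions` maps a resource path to
// its revision history, sorted ascending by timestamp.
type Revision = { timestampNanos: bigint; contentHash: string };

function selectRevision(
  archiveUrl: string,
  revisions: Map<string, Revision[]>
): string | undefined {
  const match = archiveUrl.match(/^\/archives\/(\d{4})-(\d{2})-(\d{2})(\/.*)$/);
  if (!match) return undefined;
  const [, y, m, d, path] = match;
  // End of the requested day, as epoch nanoseconds.
  const cutoff =
    BigInt(Date.UTC(Number(y), Number(m) - 1, Number(d), 23, 59, 59)) *
    1_000_000n;
  const history = revisions.get(path) ?? [];
  // Walk backwards to the newest revision not newer than the cutoff.
  for (let i = history.length - 1; i >= 0; i--) {
    if (history[i].timestampNanos <= cutoff) return history[i].contentHash;
  }
  return undefined; // no revision existed yet on that date
}
```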
TheNewYorker: on the authoring side I'll have a custom CMS written in Go that will detect changes to the working directory and preserve a single file corresponding to each unique version of a file's contents, along with a log mapping URIs to file versions
TheNewYorker: on my last project I got stung by relying on the big Wayback Machine to have captured some lost site revisions that got clobbered by accident
TheNewYorker: all file revisions will be renamed to a hash of their content, indexed by the Unix nano timestamp at which that unique hash was first generated
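A rough sketch of that content-addressed scheme, with SHA-256 as a stand-in hash (the actual tool is in Go; Node's clock only has millisecond resolution, so the nanosecond value here is padded):

```typescript
import { createHash } from "node:crypto";
import { readFileSync, copyFileSync, appendFileSync } from "node:fs";

// Hypothetical sketch of the content-addressed store: each unique file
// version is copied to <store>/<sha256-of-content>, and a log line records
// the epoch-nanosecond timestamp at which that hash was first seen.
const seen = new Set<string>();

function preserveRevision(path: string, storeDir: string, logPath: string) {
  const content = readFileSync(path);
  const hash = createHash("sha256").update(content).digest("hex");
  if (seen.has(hash)) return; // identical content already preserved
  seen.add(hash);
  const nanos = BigInt(Date.now()) * 1_000_000n; // padded to nanoseconds
  copyFileSync(path, `${storeDir}/${hash}`);
  // Append "timestampNanos <tab> hash <tab> path" to the revision log
  // (the file path stands in for the URI here).
  appendFileSync(logPath, `${nanos}\t${hash}\t${path}\n`);
}
```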
TheNewYorker: in that case I might include a redaction link in the timeline database to redirect to the nearest equivalent resource (e.g. a transparent PNG with a "redacted" caption, or a substitute web page with a callout that "The material originally appearing here was redacted due to copyright uncertainty")
TheNewYorker: I might also want to implement an errata mechanism in PHP so I could preserve the original content but also splice in a widget to show that the page had been revised to correct a broken URL or embarrassing typo.
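Purely illustrative, the redaction and errata checks could sit in front of the revision lookup like this (the map shapes are invented for the sketch):

```typescript
// Hypothetical front-end to the revision lookup: before serving a stored
// hash, consult a redactions map (hash -> substitute resource) and an
// errata map (hash -> note to splice into the page as a banner).
function resolveArchived(
  contentHash: string,
  redactions: Map<string, string>,
  errata: Map<string, string>
): { hash: string; banner?: string } {
  const substitute = redactions.get(contentHash);
  if (substitute !== undefined) {
    // Redirect to the nearest equivalent resource, e.g. a "redacted" page.
    return { hash: substitute };
  }
  // Serve the original content, optionally with an errata widget spliced in.
  return { hash: contentHash, banner: errata.get(contentHash) };
}
```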
fiatjaf: example: I read an article on some webpage that does not support webmentions (it doesn't have to be a personal website or a blog, it can be any webpage). I comment on it on my website, with a link to the commented page. my CMS sends a webmention to this webmention hub, or to various different webmention hubs. the person whose website was commented on can later go to the webmention hub and see if someone commented about his website.
fiatjaf: other people reading that same webpage can check (manually, or using some browser extension) the webmention hub for what others have said about that page, and join the discussion by posting about it on their own pages.
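A minimal sketch of the flow fiatjaf describes, using the standard form-encoded webmention POST; the hub URL and the query endpoint are assumptions, not part of any spec:

```typescript
// Hypothetical sketch: notify a webmention hub that `source` (my post)
// links to `target` (the page I commented on). The form-encoded POST of
// source and target follows the Webmention spec; the hub URL is invented.
async function notifyHub(source: string, target: string): Promise<void> {
  const res = await fetch("https://example-webmention-hub.org/webmention", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ source, target }).toString(),
  });
  if (!res.ok) throw new Error(`hub rejected webmention: ${res.status}`);
}

// Later, anyone can ask the same hub what has been said about a page
// (a query endpoint like this is an assumption for the sketch):
async function mentionsOf(target: string): Promise<unknown> {
  const url =
    `https://example-webmention-hub.org/mentions?target=` +
    encodeURIComponent(target);
  return (await fetch(url)).json();
}
```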
aaronpk: Ok, poll: which would you be more likely to do? a) include an additional tag on posts you want to syndicate to IndieNews, or b) create a separate feed where everything in the feed is syndicated?
GWG: Of course, I have a long list of things. If people started using IndieNews, I might escalate that. I'll have an IndieNews metabox in my syndication link generator before the end of the night
bret: me too! let me start over. I like the way it works now: I indicate in my front matter that I want my post syndicated to IndieNews, a link to IndieNews with the syndicate-to class ends up in the post, and a ping to IndieNews is sent
bret: the thing is, now that I think about it, people generating feeds for specific posting targets to pull/push from seems like it takes away the burden of writing additional syndication adapters
GWG: aaronpk: It is on my future plans list. I rewrote the Syndication Links plugin to allow for a variable list of targets. Adding optional Bridgy Publish support is a future feature
GWG: I have the code to read and store the response from Bridgy, which includes the URL, but I haven't tested it or implemented an interface for it. For IndieNews, I wouldn't need to check for a response.
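For context, a hedged sketch of that Bridgy Publish round trip: you send a webmention whose target is a brid.gy publish endpoint, and the JSON response carries the URL of the syndicated copy (treat the exact response shape as an assumption):

```typescript
// Hypothetical sketch: trigger Bridgy Publish by sending a webmention whose
// target is a brid.gy publish endpoint, then store the syndicated copy's
// URL from the JSON response.
async function publishViaBridgy(postUrl: string): Promise<string> {
  const res = await fetch("https://brid.gy/publish/webmention", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      source: postUrl,
      target: "https://brid.gy/publish/twitter",
    }).toString(),
  });
  if (!res.ok) throw new Error(`Bridgy Publish failed: ${res.status}`);
  const data = (await res.json()) as { url?: string };
  if (!data.url) throw new Error("no syndication URL in Bridgy response");
  return data.url; // e.g. the syndicated copy's URL, stored as u-syndication
}
```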