tantek: GWG - scrapers gonna scrape. They're too lazy to bother with proper parsing. So quite the opposite: the expected effect of markup on that risk vector is negligible, if anything.
danlyke: GWG - what's your model for misuse? My hope is that marking up my content will make it easier to scrape, and therefore easier for searches, intelligent links, etc.
danlyke: gRegor` I already have some of my source files marked up with "don't show this". I should stuff that into a database, give it a token and some sort of auth for serving it up, and attempt to load the elided parts of the document via JavaScript after load - if I choose to stay with this "render everything to static pages" model...
KartikPrabhu: GWG it is impossible for anything on the internet to be protected from download in a fool-proof way. Having a clear copyright statement, and understanding it, is the only way.
danlyke: GWG yeah, republishing and faking stuff is a thing. Hopefully whatever discovery/search mechanism we end up building will be better at sourcing stuff than Google is.
KartikPrabhu: thoughts wrt response-context and copyright: if I "like" a photo I'd want to display it as the response-context on my site, but this might be restricted by copyright (maybe it falls under fair use, but who knows). Should response-context do some sort of license discovery to avoid reproducing copyrighted images?
tantek: ^^^ GWG *that* is how you deal with people scraping/copying your content - you seed it with watermark text codes, like permashort citations, that all link/lead back to your site.
KartikPrabhu: tantek: yes, that was my initial idea, but my auto-send for webmentions broke, so for now I have ditched it. I hope to go back to it once I get auto-sending running again.
kylewm: tantek: cweiske has something like that, but goes even further: you can mouse over a link and see the send history for it, success/failure, etc.
Loqi: petermolnar: tantek left you a message on 12/19 at 5:19pm: do you have a feature you'd like to launch and start using on your site 2015-01-01? check /ownyourdata and /indiemark for some ideas.
petermolnar: I have a topic for today: now that many of us have enabled SSL (TLS) for our websites, can someone share their opinion on SSL ciphers and CPU usage? :)
petermolnar: I recently found that my server ends up in TCP timeouts, from blitz.io's point of view, after a massive amount of concurrency; after changing the allowed SSL ciphers, this was drastically reduced.
bear: petermolnar - some of the older ciphers are indeed very CPU intensive. I found that the Mozilla OpSec team does a great job of outlining which ciphers to use and why: https://wiki.mozilla.org/Security/Server_Side_TLS
bear: you can improve your nginx performance by tweaking some of the settings listed on that page for nginx - ssl_session_cache and ssl_dhparam specifically.
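For anyone following along, the two nginx directives bear names look roughly like this (a sketch based on the Mozilla guide above; the cache size, timeout, and file path are placeholder values, not anyone's actual config):

```nginx
# shared TLS session cache: resumed handshakes can skip the expensive key exchange
ssl_session_cache   shared:SSL:10m;
ssl_session_timeout 10m;

# custom Diffie-Hellman parameters
# (generate once with: openssl dhparam -out /etc/nginx/dhparam.pem 2048)
ssl_dhparam /etc/nginx/dhparam.pem;
```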
GWG: petermolnar, bear... I need to do that. I just tweaked my SSL settings based on the Mozilla intermediate ciphers and got my grade up. It had dropped to a B.
tantek: still manually using Bridgy to POSSE faves of tweets to Twitter, and manually using curl to send webmentions, but I've got indieweb like posts working!
tantek: there's almost no CSS to it - it really is about publishing smarts (the clustering, comma delimiting, using "and" between the last pair, with/without a comma, etc.)
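The name-joining "publishing smarts" can be sketched in a few lines (illustrative only, not tantek's actual code): comma-delimit the list, put "and" before the final item, and drop the comma when there are only two items.

```python
# Join a list of names for display: "A", "A and B", "A, B, and C".
# The two-item case deliberately omits the comma before "and".

def join_names(names):
    if not names:
        return ""
    if len(names) == 1:
        return names[0]
    if len(names) == 2:
        return names[0] + " and " + names[1]  # no comma for a pair
    return ", ".join(names[:-1]) + ", and " + names[-1]

# Example: join_names(["Alice", "Bob", "Carol"]) -> "Alice, Bob, and Carol"
```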
snarfed: esp since you did a ton of research and actually care about UI and design, and I just did the dumbest thing and generally abdicated all responsibility :P
petermolnar: these, by the way: DHE-RSA-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:AES128-GCM-SHA:AES128-SHA:DES-CBC3-SHA:TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA:!LOW:!MEDIUM:!RC2:!RC4:!SEED:!MD5:!aNULL:!eNULL:!EDH:!EXP:!ADH:!PSK:!DES;
Loqi: emmak_: tantek left you a message on 12/19 at 5:25pm: do you have a feature you'd like to launch and start using on your site 2015-01-01? check /ownyourdata and /indiemark for some ideas.
Loqi: tantek: aaronpk left you a message 43 minutes ago: is your home page h-feed intentionally a child of your h-card? I haven't seen that pattern on anyone else's site before
tantek: yes - intentional. The home page - tantek.com - *is* me, and I just happen to show a little recent-updates feed as *part* of it (also a sidebar description, other profiles, etc.)
tantek: I suppose I'll wait for someone writing reader functionality for their website to ask whether it would help them at all, or whether they're just happy to do "first h-feed" discovery.
aaronpk: when I consume microformats I do so from the parsed object, so it's kind of hard to find the "first h-feed", because it may be a child of any object at any depth.
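The depth-first search aaronpk is describing might look like this: walk a canonical parsed-mf2 dict (`items` / `type` / `properties` / `children`) and return the first h-feed, wherever it is nested. A rough sketch, not aaronpk's actual code:

```python
# Find the first h-feed anywhere in a parsed microformats2 document,
# including h-feeds nested as children or inside property values.

def find_first_h_feed(parsed):
    """parsed is the JSON dict produced by an mf2 parser (pin13.net etc.)."""
    def walk(items):
        for item in items:
            if not isinstance(item, dict):
                continue  # property values can be plain strings
            if "h-feed" in item.get("type", []):
                return item
            # recurse into nested children (e.g. an h-feed inside an h-card)
            found = walk(item.get("children", []))
            if found:
                return found
            # h-* objects can also hide inside property values
            for values in item.get("properties", {}).values():
                found = walk(values)
                if found:
                    return found
        return None
    return walk(parsed.get("items", []))

# Example: an h-feed nested as a child of a top-level h-card,
# like the pattern on tantek.com's home page
doc = {
    "items": [{
        "type": ["h-card"],
        "properties": {"name": ["Tantek"]},
        "children": [{"type": ["h-feed"], "properties": {}, "children": []}],
    }],
    "rels": {},
}
feed = find_first_h_feed(doc)
```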
aaronpk: but I'm not actually working on this right now; I just happened to look at your home page to see how the likes ended up in the parsed mf2 version, and noticed that.
aaronpk: the parsed mf2 version of the page doesn't care what element a class is on, so I don't see the significance of putting the class on the <body> tag, except for adding clarity for humans reading the source code.
aaronpk: the pin13.net parser will just show you the parsed version of the page as a JSON document; it doesn't care about specific class names or mf2 objects.
voxpelli: I very much need to make my design responsive though, and I'm not sure how I'm going to adapt the box style to phone screens - it probably doesn't work as well there.
GWG: Both SemPress and my theme are based on _s. Early on, it was suggested that an mf2-compliant fork of _s be made. I did a little of that. Then I tried to get _s itself to go mf2 compliant; they closed the issue. So I'm bringing the project back on my own.
voxpelli: Are there any other reliable options for retrieving avatars than a nickname-cache? I'm seeing a lot of broken Twitter avatars on old posts in my webmention endpoint.
Loqi: tantek: emmak left you a message 2 hours, 1 minute ago: your likes appeared as a normal note in my reader, i don't have any special formatting for likes
tantek: emmak - I wonder why; does your reader depend on particular h-entry fields being there? It should be able to read and show the likes without knowing about likes.
snarfed: at a higher level, it makes me sad that we here often seem to reject existing readers entirely, just because of the feed sidefile plumbing debate.
snarfed: I love how much we here prioritize use cases over plumbing - at least for indieweb sites, which are generally small data/traffic - so when we occasionally get worked up about the odd bit of plumbing best practice, like "no sidefiles", it catches me off guard.
snarfed: I guess I understand that skew and stale sidefiles result in suboptimal UX in feed readers, which is fair. Still, weighed against the use cases existing readers already provide, and all…
voxpelli: snarfed: speaking of feeds btw - have you been involved in https://github.com/Reboot-RSS/reboot-rss ? I know Superfeedr-Julien and some people from my employer Bloglovin have tried to look into how to move RSS along there.
aaronpk: So no, I don't actually think existing readers offer me much, which is why I'm not trying to solve it that way (converting mf2 to RSS and using an existing reader).