snarfedaaronpk: thanks! btw, i still occasionally see this webmention.io error: "Incorrect string value: '\\xF0\\x9F\\x98\\x9E @...' for column 'name' at row 1"
tantek… if you're a new customer on Namecheap hosting, you get PHP 5.4, but if you're an existing customer they'll put you on PHP 5.3 and keep you there.
LoqiDigitalOcean is a web hosting provider targeted towards developers and offers low cost cloud servers in data centers across the world http://indiewebcamp.com/digital_ocean
dariusdunlapInteresting… IndieWebCamp.com auth failed on my domain when I input Dunlaps.net/Darius, but then succeeded when I specified https://Dunlaps.net/Darius.
aaronpki could change the property names to h-geo stuff relatively easily, but the problem is more with sending multiple location objects in a single request
dunlaps.netcreated /Small_Computers (+1163) "Created page with "{{stub}} These '''<dfn>Small Computers</dfn>''' are low-power computers that are useful as servers and special-use controllers for a variety of applications. They can all run we..."" (view diff)
tantekone of the other points of multivalued properties (which we have not explored much but which went into the design) is that per-object semantics could indicate cross-property ties
tantekaaronpk, given that data and the practice you speak of ("think they are adding") I'd use a simple single-character-based string array, since they are flags from a discrete set
reedstrmKartikPrabhu - right. You can split on anything you specify. The only magic is that the default is 'split on whitespace, remove empty results', so it's not as reversible.
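reedstrm's point about the default split being lossy can be illustrated in Python (a hypothetical illustration; the channel wasn't tied to a specific language):

```python
s = "a  b\tc "

# Default split: any run of whitespace is a separator, empty results dropped.
print(s.split())      # ['a', 'b', 'c']

# Explicit separator: every single occurrence splits, empties are kept.
print(s.split(" "))   # ['a', '', 'b\tc', '']

# Round-tripping through the default split loses the original spacing,
# so the operation is not reversible.
print(" ".join(s.split()) == s)  # False
```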
reedstrmo.k. this is where I bow out of the discussion, because to my eye, this is exactly the kind of data that a DB is good at, at the scales it's designed for.
reedstrmThe postgresql team is _very_ conservative about that. The on-disk format seldom changes. If it does, there are tools for doing an in-place upgrade.
danlykeThe one time that didn't work, I fired up the old version, dumped it, fired up the new version and imported it. And I think it didn't work because I'd waited too many versions between distribution upgrades on that server.
reedstrmI'm running production systems that have ~ 60 GB in them on disk. They've been through every major postgresql upgrade since version 7.<something>
aaronpki should add a note to my follow up article about how writing this data to separate plain text files allows me to sync it between machines using btsync
reedstrmaaronpk: you might be surprised. The main issue w/ an occasional disconnected replication like that is that you would need to set up the primary server
reedstrmI think a postgresql install using 'log shipping' replication would probably fit the bill pretty well. Then you can worry about the format for message passing around
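A minimal sketch of what enabling log shipping on the primary looks like in postgresql.conf (the parameter names are real PostgreSQL settings but vary somewhat by version, and the archive path is purely illustrative):

```
# postgresql.conf on the primary: ship completed WAL segments to an archive
wal_level = replica          # record enough WAL detail for a standby
archive_mode = on
archive_command = 'cp %p /var/backups/wal/%f'  # %p = segment path, %f = file name
```

A standby then replays the archived segments, which suits the occasional, disconnected replication reedstrm describes.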
reedstrmwell, that's your call. My experience is that my postgresql dbs take essentially no maintenance, other than making sure I don't actually run out of disk space. Even then, it's stop, add more space, start, and it auto recovers. Now, the mysql db backing my home DVR, on the other hand ... I won't start :-)
tantekdepending on any one source-base / implementation is a bad long term design decision. your dependencies should be on open formats and protocols, not any one open source project.
tantekuntil someone else builds an implementation that can interoperate with PostGIS files, protocols, replication etc., PostGIS is just another monoculture vulnerability.
tantekand this is also why it's safer to longterm keep your data in the thing that is interoperable (e.g. the dumps) rather than the thing that is *only supported by one codebase* (actual database files)
danlyketantek, yeah, my experience with tab delimited SQL dumps (the last few I've looked at) is that they're easy enough to parse in Perl that that's as good an intermediate format as any.
tantekanyone involved in standards knows that that means you can't trust the supposed "open standards and formats" when there is only one implementation.
danlykeyeah, I think one of the reasons the PostgreSQL users tend to be less worried about it is that PostgreSQL has a culture of "we're implementing the SQL spec" which means interop with Oracle/SQLServer/etc (with GIS extensions) has long been possible.
tantekdanlyke - that sounds like they're pursuing a path of being compatible with at least one other implementation, and that *greatly* mitigates the monoculture risks
reedstrmBut as long there are robust export tools to open formats, the existing open source implementation doesn't matter. If it stops growing, gets taken private, whatever, your code doesn't stop working. We're not talking a webservice here.
tantekand if you're backing up in "standard" export formats, but not actually re-using them, then you're taking *on faith* that the standard export formats actually maintain full fidelity of your data.
reedstrmRight, it's a balance - you've drawn a bright line in the sand, claiming that flat files are the one true way. I contend that there's essentially no difference between the software needed to read those files and the dreaded "monoculture" of a system like postgresql.
tantekbtw any reasoning in the form of, or any system that depends on, "plenty of time to notice abc and do xyz" is a form of future time-debt and not zero-cost. accrue enough of those and you reach failure, or pass the future time-debt on to someone else who may already have too much future time-debt of their own.
tantekreedstrm - historically there's been quite a big difference in longevity between storing things in open formats with multiple implementations, and storing things in a format (open or not) with a single implementation.
tantek"as much as" - nope. a single implementation solution means you have to pay attention to that implementation (time debt). text files, or any multi-implementation format, you can ignore the implementations (no additional time debt).
reedstrmThere is not a zero _current_ cost to your position, however. Innovation by its very nature involves doing things that no one else is doing. Hence, single-implementation.
reedstrmI think we are actually both on the same side of the bell curve on this argument. I just think you're throwing a lot of babies out with the bathwater of the 'monoculture' curse
tantektwo things. 1) I've seen enough such "innovations" die that I have no desire to invest any time in them. too risky to be worth my time. perhaps that *is* a personal age/experience thing. 2) every time I've seen a "monoculture"-style open source project/community dominate an area, it inevitably gets displaced by something else and dies. Every time.
aaronpkexample: firefox. say mozilla blows up and stops updating firefox. are you *really* going to start digging into the code to make updates to it when you want it to support new browser standards?
tanteksparverius - these monocultures often follow a path of feature bloat (due to popularity / dominance / enterprise), and then slowdown, and then displacement, neglect, slow quiet whimpering decline, and then death.
reedstrmI never said it would be free of cost. Everything costs something. I think you're paying daily costs that are avoidable, and compound to more, in my estimation, than the future cost of migrating the data. Which will need to be migrated anyway, multiple times (see: lifetime of media)
reedstrmaaronpk: if you want to eventually migrate this down to internet-of-things sort of level, where devices may have much less bandwidth to report their location (or storage to accumulate it), worrying about the bytes might make sense.
tantekwhether you use ActivityStreams or not, it's likely that if you're interested in indieweb-like things, it's worth your time to read the specs, for ideas, background, research if nothing else
tantekdisclosure: I'm a co-chair of the W3C working group that produced those, and I myself *do not* *yet* support ActivityStreams 2.0 on my own site (no dogfood from me yet)
tantekalso heads-up, I can't find Activity Streams 2.0 support on the editor's site either: http://www.chmod777self.com/ (apparent lack of selfdogfood)
aaronpkI am considering switching it to something like tantek suggested, a charmap. also because I should probably be storing the confidence of the motion as well.
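The charmap idea tantek suggested can be sketched like this; the flag set here is hypothetical (the actual motion states and characters are aaronpk's to pick), but it shows the encode/decode shape of a single-character flag string:

```python
# Hypothetical flag set: one character per motion state.
FLAGS = {
    "s": "stationary",
    "w": "walking",
    "r": "running",
    "c": "cycling",
}

def encode(states):
    """Encode a list of state names as a compact flag string."""
    by_name = {name: char for char, name in FLAGS.items()}
    return "".join(by_name[s] for s in states)

def decode(flags):
    """Expand a flag string back into state names."""
    return [FLAGS[c] for c in flags]

print(encode(["walking", "stationary", "running"]))  # 'wsr'
print(decode("wsr"))  # ['walking', 'stationary', 'running']
```

Confidence could ride alongside as a second per-sample character or digit, at the cost of doubling the string length.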
tantekreedstrm: no implementations yet AFAIK. exactly the questions I've been asking. implementations have been promised by *next week* for W3C's TPAC meeting in Santa Clara, and the Social Web WG f2f meeting: https://www.w3.org/wiki/Socialwg#Face_to_face_Meetings
benwerdMeanwhile I was just invited to Tsu.co, which is a social network that splits advertising revenue with you. Lots of people dancing around what needs to be done, nobody quite going all-in.
tantekKartikPrabhu: yeah that thread makes it pretty clear how clueless Ello is and thus fairly ignorable (unless you see some actual UX innovations that are worth documenting)
reedstrmbenwerd: It's a long standing techblog from the early days of 'the new journalism' so, yeah, actual revenue. Seems to have a rep for getting scoops. Now if I could just remember the name ...
danlykereedstrm I've observed before that based on consumer behavior, privacy and control of personal information have negative value to most purchasers.
aaronpkhey bridgy people, do you know what would happen if many people created a facebook event with one of my event URLs at the end? would all the event RSVPs end up on my site?
Loqitantek meant to say: I'm not kidding about this. The Atom version of my posts costs me more bandwidth, so why shouldn't I pass that along to the consumers?
tantekbenwerd, but seriously, charging for complex formats seems like a good idea, especially when it's enterprises (those with lots of $) who ask you to publish them.
ShaneHudson_My site needs $8m funding btw. Let me know if anyone finds someone 'clever' enough to see the investment opportunities, they seem to be everywhere in SV
Loqipayment in the context of the indieweb refers to a feature on an indie web site that provides a way for the visitor to that website to pay (currency, gift card credit, etc.) the person represented by that indie web site http://indiewebcamp.com/payment
tantekthe rest (payswarm web-payments etc.) is so far away from any kind of indieweb implementation that it's ignorable IMO - unless someone wants to try building it on their own site here?
reedstrmThere was a recent SMBC something like that - the economic analysis of the 'ultimatum game': I give you $100, w/ the stipulation that you have to offer some fraction of it (any fraction at all) to aaronpk. If he accepts your offer, you both keep it. Otherwise, the deal's off.
aaronpkone nice design decision of payswarm is this: "While this algorithm outlines how transfer records are transmitted and recorded in a decentralized fashion, it does not outline how each authority ensures that currencies are exchanged between financial institutions."
tantekyeah - like I said, IMO payswarm is ignorable because it's so far off in the weeds. if you find something useful, maybe document it on a /payment-research page
reedstrmclosely related phenomena: "While we have the wall open" during a home remodel. The "since I'm in this code anyway" refactor. Anyone have others?
GWGreedstrm: Usually it's...I'm calling in the electrician...do I need him to do anything else while he is here...because I get a better deal if I fill his time more effectively
GWGBut, my problem is that I want to add features, but I see this massive way it could go if I continue along the same path...except I want to find an intermediate point in case I don't.
GWGKartikPrabhu, it isn't quite that. It is more...I know I want to go from A to B...but A to B is big...I need to figure out how to work incrementally toward B, and figure out intermediate point C
GWGIf I get into profile pictures and author names...the eventual endgame for that is a complete profile of the author, because it leads into all sorts of other things in the future. That includes things like authentication, messaging, etc.
reedstrmConsider the 'stored in post' part a convenience cache for the full-bore profile. Implement it first. Ah what are you doing full profiles for others, anyway? Don't they have pages for that?
reedstrmAh got it. Well, I'm always in favor of the DRYer approach. Store a profile, but allow it to be partial. A miss in a cache just means go look it up.
reedstrmHad to code that for search results: author info on each returned item, but wanting facet filters by author. Answer was to dry it out, return all the people as part of the result set, and ref. it in the actual item display.
GWGI want to have, for example, a tantek profile because I may respond to him regularly. But if I'm commenting on an article in a journal, the author is often superseded by the publication.
reedstrmo.k. simpler question, not topic. :-) Lacking any other info, if you have the modification date of the object, I scale cache time to the age: if they haven't updated it since 1992, it's likely to be good for a long while. Lacking that, 1 day/week/etc. (i.e. pick a number)
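reedstrm's age-scaled cache heuristic can be sketched in a few lines; the function name and the 10% fraction are illustrative choices, not anything specified in the channel:

```python
from datetime import datetime, timedelta, timezone

def cache_ttl(last_modified, now=None,
              fraction=0.1, floor=timedelta(days=1)):
    """Scale cache lifetime to the object's age: the longer it has
    gone without changing, the longer we trust the cached copy.
    Falls back to a fixed floor (reedstrm's 'pick a number')."""
    now = now or datetime.now(timezone.utc)
    age = now - last_modified
    return max(floor, age * fraction)
```

With these defaults, a profile untouched for 100 days gets a 10-day TTL, while anything modified in the last 10 days falls back to the 1-day floor.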
GWGHmm...then the logical thing to do is write a function that retrieves the data from wherever it is stored. Then change where it is stored without changing the function that retrieves it
GWGreedstrm: I'm not much of a programmer in general. I've always understood code, but rarely wrote it. I've gotten back into it via Indiewebcamp, because I want to add some functionality, but it is mostly centered around metadata.
reedstrmThe way to be a programmer is to ... program. Surprisingly few of the people who go to school to learn to program read code. This confuses me. It's like authors who don't read.
GWGUnless you need a BASIC program written for the Commodore 64, or you want me to activate the secret 320x240 mode on a VGA adapter using C...I'm pretty out of date on a bunch of things
tgbrunGWG: I have a website and am sloooooowly implementing indieweb principles. I link my posts to facebook and twitter through nextscripts and am looking to get the comments and likes, etc. back.
tgbrunI also looked at syndication link and web comments plugins but I don't see how to set them up to find the comments, etc. I have subscribed to brid.gy. Is there anything else I need to do?
tgbrunyes to semantic linkbacks and to semantic comments. Do these plugins play well together, I'm confused about which I should have and which don't work with others
snarfedtgbrun: are you tombruning.com? if so, looks like bridgy is failing because you're currently requiring js and cookies for comments. that's probably due to a plugin; you'll need to loosen that
tgbrunGWG: now that I disabled and will uninstall spamshield, which did a great job of stopping spam (and useful comments as well) what would be a good way to stop spam?
bret!tell KartikPrabhu Your use case is turning your paper/book into another format, such as LaTeX, from the HTML/MF2 copy, and don't want to degrade the organizational structure
LoqiKartikPrabhu: bret left you a message 19 minutes ago: Your use case is turning your paper/book into another format, such as LaTeX, from the HTML/MF2 copy, and don't want to degrade the organizational structure
reedstrmwe're doing something similar, though not MF2 specifically - html5 but using CSS3 as the transform language, if you will. Currently targeting outputs to web, epub, and PDF (via princexml, I'm afraid. Proprietary bit)
joskarI've spent the last half hour wondering why my code for accepting webmention comments wouldn't work. Turns out I was refreshing the wrong page :)
Loqij12t: benwerd left you a message 1 week, 5 days ago: Interesting feedback from timmmmyboy - would like packages vs an entire distribution (helps run Reclaim Hosting)
tantekj12t - I'd say it depends on when you want to give your things an identity with a separate security context (then (sub)domain), or part of an existing security context (your choice, like your indieweb of posts).
j12ttantek: so you are saying that if I go to Lowe's and pick up a thermometer, and to the Home Depot and Office Depot and so forth, when I get home I'll then proceed to enter them all in my GoDaddy control panel?
tantekanother particular concern of getting into the "indieweb of things" is privacy about the existence of your things, e.g. at home, which would imply that we may need to address allocation of (sub)domain based identities which don't leak outside your home.
danlykeie: local links happen as http://www.flutterby.com/resolve.cgi?id=[device-and-document-public-key] , there's something common to any site which uses this scheme to do resolution (so links on your site are only dependent on your server), but eventually browsers can use something like the Bittorrent blockchain to resolve that key. Releasing us from DNS tyranny.
LoqiAlgorithmic is a term often used to refer to data such as URIs or other identifiers. An identifier is algorithmic if there is a way to decode it into additional information, typically about the thing it identifies http://indiewebcamp.com/algorithmic
danlykej12t maybe... I'm trying to figure out how to give, say, my sister some sort of identity/server/document mapping that doesn't require her to pay an extra $10/year (and the attention to it), because that's part of the barrier to her moving off of Facebook/etc.
tantek4) if years down the road one of your stored links fails to return "the same thing" as before (e.g. change of rel=canonical, 404, etc.), go pull that URL out of one of the internet archives, *USING* the date embedded in the very URL itself as the time-window to look for it.
tantekBONUS: you can even use this technique to *verify* rel=canonical changes by retrieving the *archived* version as above, comparing it to a new rel=canonical / redirect destination, and if it's "similar enough", updating your storage of the canonical URL.
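The lookup step tantek describes can be sketched against the Internet Archive's Wayback availability API, building the query from the date embedded in the stored permalink (the function name and URL layout here are illustrative; the API endpoint and its `url`/`timestamp` parameters are real):

```python
from urllib.parse import urlencode

def wayback_query(url, yyyymmdd):
    """Build a Wayback Machine availability-API query, using the date
    embedded in the stored permalink as the target snapshot window."""
    params = urlencode({"url": url, "timestamp": yyyymmdd})
    return "https://archive.org/wayback/available?" + params

print(wayback_query("example.com/2014/301/article", "20141028"))
```

Fetching the resulting URL returns JSON whose `archived_snapshots.closest` entry (if present) points at the capture nearest the requested timestamp, which is the archived copy to compare against any new rel=canonical destination.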
danlykeso tantek, what portion of distributed URL de-shortening out there addresses my concerns about eventually being able to do that local to the browser?
tantek_What it does is make your rental of a domain name over a time period actually work for permalinks, without having to rent the domain name for all time.
danlykethe harder part is that archive.org won't give me stuff because the domain changed hands and they interpret the new robots.txt (or whatever) as not allowing access to the old stuff.
danlyke(Got this when trying to retrieve Elf Sternberg's "Balkanize Usenet" manifesto from archive.org. I can see that they've got it, but he had it on his old ISP's web server and that domain is now some completely different company)
danlykegiven that the date portion is primarily useful from the standpoint of the linker (rather than the linkee), embedding that in my document somehow seems more important than trying to convince targets that somehow they should URL-date their documents.