cwebber2aaronpk: but it makes it more complicated on the server... tantek's on the right track in saying that if we add the ability to *just add* something
rhiaro... In both of these examples of the same update operation as form-encoded and json, the data structure of the request is the same, so you can convert between them
rhiaro... It means that clients and servers don't have to handle both formats, so in theory it should be easier for both because there are fewer cases to handle
rhiaroeprodrom: There is an argument from a consistency point of view.. I've been using form-encoded for creation, why should I switch to json for update? That said, if real implementors are not saying this, it makes sense to me that if there are two ways to do something and everyone is doing it one way, it's not necessary to support both ways
rhiaro... Publishing clients MUST support sending form-encoded requests, and they may only publish and never edit posts. Whereas editing clients are going to support the full list of operations on posts, so they can just use json all the way through
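A minimal sketch of the equivalence being discussed, assuming a placeholder micropub endpoint, access token, and post URL, and using the bracketed form-encoded update syntax from the draft under discussion:

    import requests

    MICROPUB_ENDPOINT = "https://example.com/micropub"   # placeholder endpoint
    HEADERS = {"Authorization": "Bearer xxx"}             # placeholder access token

    # The update expressed as a form-encoded request (draft bracket syntax)...
    requests.post(MICROPUB_ENDPOINT, headers=HEADERS, data={
        "action": "update",
        "url": "https://example.com/posts/100",
        "replace[content]": "Updated text of the post",
    })

    # ...and the same update as JSON. The underlying data structure is the same,
    # so an endpoint (or client library) can convert between the two forms.
    requests.post(MICROPUB_ENDPOINT, headers=HEADERS, json={
        "action": "update",
        "url": "https://example.com/posts/100",
        "replace": {"content": ["Updated text of the post"]},
    })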
rhiaroaaronpk: My current thinking with the media endpoint is that based on what I've seen with... github issues - if you drag a photo into an issue it uploads it right away and puts a url into the markdown. In these cases it seems like the url to the image is permanent, it is meant to be the actual location of the photo
rhiaro... The reason the spec should specify it is if we want to be able to have someone create a media endpoint service that clients and servers can expect to work a certain way
rhiaro... You can implement your own in your micropub endpoint, then it becomes an implementation detail. But if we want to support stand-alone media endpoints then clients and servers need to know how it will work
rhiarosandro: clients must do media endpoint discovery? they can't just post it to the micropub endpoint? The discovery thing concerns me. Seems like a whole complication
rhiaroaaronpk: it's a different issue. I like that for clients that only want to create posts they can just post a photo to the micropub endpoint. That's still in there in the form-encoded creating
rhiaro... One of the reasons for using a media endpoint at all was for user experience when you're putting multiple photos in a blog post. Also if you want to create a post with the json syntax you have to do two different posts
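A sketch of the two paths being contrasted here, assuming a placeholder endpoint, token, and file part name: media endpoint discovery via the micropub config query for clients that want the upload-first flow, and a direct multipart post for clients that only create posts:

    import requests

    MICROPUB_ENDPOINT = "https://example.com/micropub"   # placeholder
    HEADERS = {"Authorization": "Bearer xxx"}             # placeholder token

    # Discover the (optional) media endpoint from the micropub config query.
    config = requests.get(MICROPUB_ENDPOINT, params={"q": "config"}, headers=HEADERS).json()
    media_endpoint = config.get("media-endpoint")

    if media_endpoint:
        # Upload the file on its own; the response's Location header is the file's URL.
        with open("photo.jpg", "rb") as f:
            upload = requests.post(media_endpoint, headers=HEADERS, files={"file": f})
        photo_url = upload.headers["Location"]

        # Create the post separately, referencing the already-uploaded photo by URL.
        requests.post(MICROPUB_ENDPOINT, headers=HEADERS, data={
            "h": "entry",
            "content": "Checked in with a photo",
            "photo": photo_url,
        })
    else:
        # Simple clients can skip discovery and post the photo directly to the
        # micropub endpoint as a multipart form-encoded create.
        with open("photo.jpg", "rb") as f:
            requests.post(MICROPUB_ENDPOINT, headers=HEADERS,
                          data={"h": "entry", "content": "Checked in with a photo"},
                          files={"photo": f})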
rhiarocwebber2: Media is specifically associated with a post in mediagoblin's case. You upload it and it ends up going through a step where it gets transformed by the processing to generate multiple resolutions of the file etc, and also associates that...
rhiaro... Even when the upload either succeeds or fails it still provides a better experience, because when it does succeed it's great. It doesn't have to support partial upload to provide a better experience
rhiaro... There is this pattern that we're seeing implemented by lots of services, so it's useful to capture that in the spec and encourage implementors to also follow that pattern
sandrotantek: The group is aware there are different vocab approaches at work. And has converged them in some places. But in our parallel approaches work mode, we don't see this as a blocking issue.
sandrocwebber2: maybe over the next few months it might be a fun experiment to see how far you can get crossing the vocabs and the pub protocols, but it might get us into trouble
sandrosandro: How about instead we just have each draft in a big box point to the other spec, "This is one of two Social APIs from the socwg, with slightly different use cases and approaches, implementors should check out the other one"
aaronpk"This is one of two client APIs being produced by the working group with slightly different use cases and approaches. implementers should check out and review the other approach here."
tantekaaronpk: if you have a document like a PDF that is restricted, then create a separate page that the document references, so that there's an actual page with the document's metadata
tantekaaronpk: or I could add something with the suggestion, if you have restricted / paid access content, you should create a landing page for that content that is public that has the links
tantektantek: I think the intent of this requirement was that the receiver at the target's domain knows that the target is a valid resource, like the page / redirect actually exists
tantekaaronpk: another example is perhaps a paid proxy that receives webmentions on behalf of others, and if someone's account expires, then the proxy would stop accepting webmentions on behalf of the target
sandroPROPOSED: Close webmention #40 with editorial revision clarifying that one should only look for HTML tag if content is HTML. Non-HTML resources MUST use the HTTP Link header for discovery. Each additional discovery mechanism imposes a cost on every sender, which we want to avoid.
sandroRESOLVED: Close webmention #40 with editorial revision clarifying that one should only look for HTML tag if content is HTML. Non-HTML resources MUST use the HTTP Link header for discovery. Each additional discovery mechanism imposes a cost on every sender, which we want to avoid.
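A minimal sketch of the discovery rule in this resolution, assuming the Python requests library and a hypothetical find_rel_webmention_in_html helper for the HTML case:

    import requests

    def discover_webmention_endpoint(target):
        # Sketch of the resolved rule: always honour the HTTP Link header;
        # only scan the body for rel="webmention" when the content is HTML.
        resp = requests.get(target)

        # rel="webmention" in the HTTP Link header works for any media type (PDF, images, ...).
        if "webmention" in resp.links:
            return resp.links["webmention"]["url"]

        # Only fall back to parsing the body when the target is actually HTML.
        if resp.headers.get("Content-Type", "").startswith("text/html"):
            return find_rel_webmention_in_html(resp.text)   # hypothetical HTML-parsing helper

        return None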
tanteksandro: ok with MAY, some techniques include: setting the right media types in your Accept header, aggressively closing the connection if it's a media type you don't know what to do with
eprodromPROPOSED: Add text to security considerations for Webmention to suggest using HEAD request during verification, AND add text to Verification section to suggest using Accept header
eprodromPROPOSED: Add text to security considerations for Webmention to suggest using HEAD request during verification, AND add text to Verification section to suggest using Accept header closing issue #46
eprodromPROPOSED: Add text to security considerations for Webmention to clarify that it is allowed to use HEAD request during verification, AND add text to Verification section to suggest using Accept header, closing issue #46
eprodromRESOLVED: Add text to security considerations for Webmention to clarify that it is allowed to use HEAD request during verification, AND add text to Verification section to suggest using Accept header, closing issue #46
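A sketch of how a receiver might apply both suggestions during verification, assuming the Python requests library and a placeholder target-in-body check:

    import requests

    def verify_source_mentions_target(source, target):
        # HEAD first: lets the receiver reject obviously unsuitable sources
        # (wrong media type, huge Content-Length) without downloading the body.
        head = requests.head(source, allow_redirects=True, timeout=10)
        if not head.headers.get("Content-Type", "").startswith("text/html"):
            return False

        # GET with an Accept header stating the media types the receiver can process.
        resp = requests.get(source, headers={"Accept": "text/html"}, timeout=10)
        return target in resp.text   # placeholder check; a real receiver parses the document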
rhiaro... Linking to implementation reports, template, linking to test suite, submission process, change links to repo, adding a note about dropping features
rhiaro... Next steps there will be contacting the companies on that list, letting them know we're moving to CR and we'd like to get their implementation reports
rhiarotantek: I'm specifically looking to see what percentage of AS1 implementations (that are current - there are old ones that nobody has touched for years, don't expect those) adopt AS2
rhiarosandro: just looking at the transition request for it, in reverse order: we should link to the implementations so far, which would at least be the empty implementation report repo
rhiaro... For wide review, I don't know about wide review for AS2. There's tons of github issues. Have we sent emails or announcements we can point to?
rhiaroaaronpk: this may not be related, but when we're trying to get people to implement AS2, what is the incentive for people who are not members to implement the draft before it's an actual rec?
rhiaro... It's unlikely to change, but if it's going to change.. if they're going to hit a fatal problem with it, it's better to know that before it's too late to change it
sandro"Now it is such a bizarrely improbable coincidence that anything so mindbogglingly useful could evolve purely by chance that some thinkers have chosen to see it as a final and clinching proof of the non-existence of God. "
rhiaro... The only normative change to this since the last version is that more people have started publishing video posts so video got added to the algorithm
rhiaro... it references AS2 and AS2 vocab in informative explanations, like examples. That's in the document itself; there's no summary that explains the document's relationship with AS2
rhiaroeprodrom: I feel like the abstract clearly says ... *reads abstract* ... so you don't have a post type (check), you want to determine the type of that post (check) -> this is the algorithm to do it
rhiarocwebber2: one of the major things I was interested in with this, that makes it really useful to the group, especially with having mp and ap moving forward at the same time, is that it provides a bridge between the things we currently have in the group
tantek"Post type discovery helps provide a bridge between systems without explicit post types (e.g. Micropub, jf2) to systems with explicit post types (e.g. ActivityPub, Activity Streams)."
rhiarorhiaro: the vague language is not good for rec track; it would be clearer how it's useful if it specifically used AS2 terms, e.g. RSVP post doesn't exist in AS2
rhiarotantek: we could also have conformance classes like if you are an AS2 generating application you must generate the following objects from the following types
rhiaro... Various sites that do RSS feeds of their activities that have made stuff up. I can research to see if there's something I can add to post type discovery to make that more explicit
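A rough Python paraphrase of the kind of checks Post Type Discovery makes, including the video addition noted above; property names follow microformats2, and the exact property list and ordering are whatever the draft specifies, not fixed here:

    def discover_post_type(properties):
        # Map a post with no explicit type to an explicit type by inspecting its properties.
        if "rsvp" in properties:
            return "rsvp"
        if "in-reply-to" in properties:
            return "reply"
        if "repost-of" in properties:
            return "repost"
        if "like-of" in properties:
            return "like"
        if "video" in properties:        # the recently added video check
            return "video"
        if "photo" in properties:
            return "photo"
        # An explicit name that isn't just a prefix of the content suggests an article...
        name = " ".join(properties.get("name", [])).strip()
        content = " ".join(properties.get("content", [])).strip()
        if name and not content.startswith(name):
            return "article"
        # ...otherwise it's a plain note.
        return "note"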
rhiaro... The first version 0.3 had a number of interesting characteristics, one is that it was only defined for Atom feeds. Another was that it had a kind of complicated set of roles; a publisher and subscriber, and then a 'hub', so you can set it up so the publisher and subscriber don't have to scale, but the hub does
rhiaroeprodrom: big changes in 0.4, communication between publisher and hub. Redefined how to do publication and subscription for things that aren't atom feeds
rhiaro... First was that when the open web foundation was first announced, google had announced that they would be putting a number of specs under the open web foundation patent license and so there are blog posts to that effect, but they never actually published the paperwork that says, signed at the bottom, this is under this patent
rhiaro... By the time that we started to be interested in this, and having it as a w3c spec, the people who worked on it were no longer working on it and there did not seem to be as much of an institutional interest in this kind of standardisation around feeds
rhiaro... Now we have some diversity of hubs and implementation experience and it seems like everyone's... people are using different hubs and publishing to different hubs, and everything seems to work. I don't think we've run into interop problems where your site can only go to one hub because of how it's implemented; a reader that supports consuming Atom or h-feed in real time via PuSH 0.4 seems to work with all of the hubs
rhiaroaaronpk: the reason they all are working together is that the holes that were left in the spec we have all filled in the same way because of the tutorial on the indiewebcamp wiki
rhiarosandro: the press around it is all about fat pings, but indiewebcamp doesn't use it for fat pings. There's no format defined for what a fat ping would look like
rhiaro... And if you go look at the section how to subscribe, it walks you through every part of the request, including receiving notifications, including separate sections for standard and fat pings
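A sketch of the subscription flow being described, assuming placeholder feed and callback URLs and the Python requests library; discovery here uses HTTP Link headers, though the hub and self links can equally appear as <link> elements in an Atom or HTML document:

    import requests

    FEED_URL = "https://example.com/feed"          # topic the subscriber wants updates for
    CALLBACK = "https://subscriber.example/notify" # where the hub will deliver pings

    # Discover the hub and the canonical topic URL from the feed's Link headers.
    feed = requests.get(FEED_URL)
    hub = feed.links["hub"]["url"]
    topic = feed.links.get("self", {"url": FEED_URL})["url"]

    # Subscribe: a form-encoded POST to the hub. The hub then verifies the
    # subscription with a GET to the callback before sending notifications
    # (thin pings or fat pings) to that callback.
    requests.post(hub, data={
        "hub.mode": "subscribe",
        "hub.topic": topic,
        "hub.callback": CALLBACK,
    })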
rhiarosandro: the takeaway from this description is that PuSH 0.4 by itself is not useful to us, but refined the way aaron has is useful for some subset
rhiaro... We have two or three options... we take the PuSH 0.4 and take it to some sort of rec level right now and kind of steward it through that process
rhiaro... The other is that we take the PuSH 0.4, make an 0.5 that clarifies some of the things that we're doing, but maybe talks about what's specifically being used in the indieweb community
rhiaro... Third is that we don't do anything with it and accept that it's a community standard but that we don't necessarily have anything to add to it
rhiaroeprodrom: right, we could do something similar. When you do discovery you could do it for some other name, like not 'hub' it's 'publisher' or something
rhiaroeprodrom: google is a member of w3c, if we decided to publish a new version of this spec, part of that process would be a call for exclusions, which is where they say whether they have ip considerations that would block publication of this spec
rhiarotantek: I would say that if we took on PuSH as a work item in this group whether called that or called something else, then if we successfully produced a rec, it would put it in a stronger .. or in a more implementable with less ip concern situation than we have today
rhiaro... The larger/first issue to resolve before the ip issue is that there was the CG, Julian still felt very strongly about editing and updating the spec. I think that were we to decide to go forward with it, specifying the details we have figured out that allow interop would be a good thing, and I would not be comfortable having that gated on someone outside of the group
rhiarosandro: both what you said and the name, the right thing to do about the name is to ask the people who feel they have ownership of the old name, to see if they want us to call it PuSH 0.5 or name it a new thing
rhiarotantek: I would word it more strongly - hey we like the work you've done, we've continued trying to specify details, we would like to take that work and publish it with the same name with a new version number
rhiarotantek: I believe brad doesn't care... bret is happy to see anyone build on it... I think neither one of them wants to deal with talking to google's lawyers
rhiaro... Julian feels the strongest, he produced 0.4. If there's anyone we need good vibes from, make sure he knows and agrees with it happening, it would be Julian
rhiaroeprodrom: Another objection... limited time, limited resources. I'm not going to edit this. I don't know who is. But we'd need to have someone step up and do it. We only have 7 months
rhiarotantek: sounds like what you're saying is if you start down that path of a PuSH based system you're gonna end up stuck with public-only functionality
rhiaro... But it helps at least capture the state of the art use of PuSH, for anyone who wants to know, here are implementations, if this is good enough for your use cases
rhiaro... On the resource thing, is maybe a step here to put the word out that if someone is willing to take on the editorship we would be interested, or do we want to wait until Aaron has time?
rhiarocwebber2: if there is another place we can do it, I would prefer it. I'm committed to wrapping up my work and if that means I have to take a huge chunk out of my finances I will do it, but I would kind of prefer something less expensive
rhiaro... I think it's really important we have this meeting. This time is really important. This location.. but maybe this is the only reasonable time we'll do it. So I'm for it.
rhiarosandro: one of the main reasons for this location is if we get people Wednesday, and talking to people during tpac, to try to bring in new blood and share. Some may stop by the WG meeting