#dev 2018-06-24

2018-06-24 UTC
[eddie] joined the channel
#
[grantcodes]
Yay! Refs show up in Together now! Looks great with my twitter feed 🙂
#
Loqi
😄
#
aaronparecki.com
edited /shirts (+1) "new url"
(view diff)
#
aaronparecki.com
edited /swag (+1) "new url"
(view diff)
renem, [nate658] and [eddie] joined the channel
#
[eddie]
ohh cool!
jjuran and [cleverdevil] joined the channel
#
[cleverdevil]
Nicely done [grantcodes]! This makes Together a much better Twitter client 😀
#
[cleverdevil]
I’d like to debug the PWA issue on iOS 12 at IWS.
#
[cleverdevil]
I’m tempted to try and whip up the equivalent of an Electron app for iOS with Together embedded to avoid the PWA issue :P
#
dougbeal
What is PWA
#
Loqi
Progressive Web App (PWA) is a web site that a client can progressively enhance into a standalone app that's comparable with a native app https://indieweb.org/PWA
[nate658], iasai, snarfed, KartikPrabhu, barpthewire, [jgmac1106], [wiobyrne], tantek__, [snarfed], [anika] and sebsel joined the channel; mblaney left the channel
#
aaronpk
I just learned some cool tricks with letsencrypt
#
aaronpk
their wildcard certs require DNS based validation, which is a bit trickier since it requires that the client have API access to write DNS records
#
aaronpk
rather than set that up for each domain, you can create a CNAME record from the domain where you want the wildcard cert to a sort of "control domain" that will answer the challenges for every domain
#
aaronpk
then you can run a standalone DNS server on that domain, and someone wrote one that has a simple HTTP API for adding entries, which of course are then available immediately
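The delegation trick aaronpk describes boils down to one CNAME per domain. A sketch (domain names below are placeholders; acme-dns is one example of a standalone DNS server with a simple HTTP API for answering challenges):

```
; zone file for example.com: send DNS-01 challenges to the control domain
_acme-challenge.example.com.  IN  CNAME  1234abcd.auth.control-domain.org.
```

The ACME client then only needs API access to the control domain's DNS server, never to the zones of the individual domains being validated.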
#
Zegnat
I might ping you re that on Tuesday, if you have a couple of minutes, aaronpk. I would like to set up a wildcard cert on zegnat.net so I can stop creating new certs for every experiment I run on the subdomains.
#
aaronpk
this would make a good blog post
#
Zegnat
already uses aaronpk’s other LetsEncrypt blog post whenever he needs to generate a new cert.
#
aaronpk
oh yeah I need to publish an updated version of that for certbot
#
aaronpk
tho it's basically the same
#
Zegnat
it is the same, just some name changes
#
aaronpk
looking forward to deleting a bunch of subdomain certs once I get this wildcard stuff sorted out
[anika] joined the channel
#
schmarty
Ooh this sounds like excellent automation
#
aaronpk
i'm kind of jealous of pstuifzand for starting out his microsub server using Redis instead of a relational DB
sebsel, jjuran, chrisaldrich, [eddie] and [jgmac1106] joined the channel
#
[jgmac1106]
@zegnat Adam Ginning, a Swedish hockey player drafted by the squad I follow, just took your top spot as favorite Swede
[chrisaldrich], [dougbeal], renem_, [anika], [nate658], [wiobyrne] and pstuifzand joined the channel
#
pstuifzand
aaronpk, why is that?
#
pstuifzand
aaronpk, re: microsub and redis
#
aaronpk
i'm trying to prep aperture for a public launch, and as part of that I actually don't want to store data permanently for a few reasons. Kinda wish I could take advantage of the built-in expiration of keys and the nice list mechanisms that Redis provides for that
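A sketch of the Redis features aaronpk is missing, as redis-cli commands (key names are invented for illustration): SETEX attaches a TTL to a key at write time, and lists give cheap timeline semantics.

```
SETEX entry:abc123 604800 '{"type":"entry"}'   # auto-expire after 7 days
LPUSH channel:home entry:abc123                # newest-first timeline
LTRIM channel:home 0 99                        # cap the channel at 100 items
```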
#
pstuifzand
I was also thinking about that, but perhaps the keys expiring is not enough
#
aaronpk
depends on the data structures
#
pstuifzand
It seems in SQL you can DELETE FROM `items` WHERE `created` < DATE_SUB(NOW(), INTERVAL 7 DAY)
#
aaronpk
yeah but in practice it's a bit more complicated :)
#
aaronpk
I have a single `entries` table that stores the post's JSON data, and a separate table that maps entries to channels
#
aaronpk
so i'm removing things from the mapping table that were added > 7 days ago, but that potentially leaves orphaned entries in the entries table
#
aaronpk
that only happens for entries that were added to just one channel
#
aaronpk
also when I remove an entry, I have to go delete associated files and the file mappings before I can delete the entry
#
pstuifzand
I'm not sure, but couldn't you add foreign keys with cascade delete?
#
pstuifzand
of course that doesn't help with files
#
aaronpk
does cascade delete work that way? it's the inverse of the normal examples given for it
#
pstuifzand
I would have to think about it some more
#
aaronpk
normally you have a "buildings" table where each record has many "rooms" and when you delete from the buildings table it deletes all associated rooms
#
aaronpk
but in this case, it's like deleting the building if all the rooms have been deleted
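The usual direction of cascade delete that aaronpk describes can be sketched with SQLite (table names follow the buildings/rooms analogy; this is an illustration, not Aperture's schema). The inverse case, deleting the parent once its last child is gone, is not something ON DELETE CASCADE expresses.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per connection
conn.execute("CREATE TABLE buildings (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE rooms (
        id INTEGER PRIMARY KEY,
        building_id INTEGER NOT NULL
            REFERENCES buildings(id) ON DELETE CASCADE
    )
""")
conn.execute("INSERT INTO buildings (id) VALUES (1)")
conn.executemany("INSERT INTO rooms (id, building_id) VALUES (?, 1)", [(1,), (2,)])

# deleting the parent removes its child rows automatically
conn.execute("DELETE FROM buildings WHERE id = 1")
print(conn.execute("SELECT COUNT(*) FROM rooms").fetchone()[0])  # 0
```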
#
aaronpk
in any case, like you said it wouldn't work for cleaning up the files. so essentially what I've got now is cascade delete built in software hooks, so that I can delete the files too.
#
pstuifzand
DELETE e FROM `entries` e LEFT JOIN `entry_channel` ec ON e.entry_id = ec.entry_id WHERE ec.entry_id IS NULL;
#
pstuifzand
the LEFT JOIN with the IS NULL check is what finds entries with no channel mapping left
#
aaronpk
yes I have that too :)
#
aaronpk
but again I need to first find those entries so I can delete their files
#
aaronpk
it just seems like this is not the ideal way to implement intentionally temporary content
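The two-step cleanup aaronpk describes, finding the orphans first so their files can be removed before the rows go, can be sketched with SQLite (table and column names mirror the conversation; the file-deletion step is a placeholder):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entries (entry_id INTEGER PRIMARY KEY, data TEXT);
    CREATE TABLE entry_channel (entry_id INTEGER, channel_id INTEGER);
    INSERT INTO entries VALUES (1, '{}'), (2, '{}');
    INSERT INTO entry_channel VALUES (1, 10);  -- entry 2 has no channel left
""")

# step 1: find entries that no channel maps to any more
orphans = [row[0] for row in conn.execute("""
    SELECT e.entry_id
    FROM entries e
    LEFT JOIN entry_channel ec ON e.entry_id = ec.entry_id
    WHERE ec.entry_id IS NULL
""")]

# step 2: delete each orphan's files first, then the row itself
for entry_id in orphans:
    pass  # removing the files that belong to entry_id would go here

conn.executemany("DELETE FROM entries WHERE entry_id = ?",
                 [(e,) for e in orphans])
print(orphans)  # [2]
```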
#
pstuifzand
I'm waiting for STREAMS feature that will be available in Redis 5.0
#
pstuifzand
It seems stream = channel
[grantcodes] joined the channel
#
aaronpk
also now one of my queries is taking 10 seconds to run even though it should be using the primary key index.
#
aaronpk
can't figure out why it won't use the index
#
pstuifzand
You could also fill a table with all deleted entry_ids and have a cron job delete all files related to those entries
#
pstuifzand
did you EXPLAIN the query?
#
aaronpk
same when I write it with a JOIN instead
#
pstuifzand
did you include the media_id = 342 in the join?
#
sknebel
if you want to generally delete everything after X days, I'd just organize the file storage per day?
#
aaronpk
I am making this more complicated for myself, because I want to be able to change the threshold of how long before stuff gets deleted *per user*
#
pstuifzand
does it help if in the join case you change WHERE to AND ?
[eddie] joined the channel
#
pstuifzand
can you create an index with the columns swapped? media_id, entry_id
#
aaronpk
would that help? it's the entries.id index that it should be using for the join I thought
#
pstuifzand
yeah it should; the theory is that it can't use the existing index, because media_id is in the second column
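pstuifzand's theory about column order can be checked with SQLite's EXPLAIN QUERY PLAN (table and index names here are invented; MySQL behaves analogously for composite B-tree indexes, where only a leading-column match makes the index usable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entry_media (entry_id INTEGER, media_id INTEGER)")
# the leading column of the composite index matches the WHERE clause
conn.execute("CREATE INDEX idx_media_entry ON entry_media (media_id, entry_id)")
conn.executemany("INSERT INTO entry_media VALUES (?, ?)",
                 [(1, 342), (2, 342), (3, 7)])

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT entry_id FROM entry_media WHERE media_id = 342"
).fetchall()
detail = " ".join(str(row[-1]) for row in plan)
print(detail)  # reports a search using idx_media_entry, not a full table scan
```

With the columns the other way around, `(entry_id, media_id)`, the same query falls back to scanning the table.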
#
aaronpk
tries in a test db
#
aaronpk
ooh it does
#
pstuifzand
is it faster with the columns swapped in the index?
#
aaronpk
yep, now it's using the index
#
aaronpk
pstuifzand++ so much faster now :)
#
Loqi
pstuifzand has 2 karma
#
pstuifzand
very nice
#
aaronpk
considers switching to an image proxy instead of actually downloading these to avoid this problem entirely
[schmarty] joined the channel
#
[schmarty]
imageproxy++
#
Loqi
imageproxy has 1 karma
trip_, [jgmac1106], gRegor-mobile, KartikPrabhu and [grantcodes] joined the channel
#
Zegnat
This dev project is going to be expensive. Different Puppeteer projects I want to test all download their own version of Puppeteer, with its own embedded version of Chromium. That’s way too much data... I need to move to developing in the cloud rather than locally, I guess
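One possible workaround for Zegnat's download problem, assuming a shared system Chromium is acceptable: Puppeteer honors an environment variable that skips the bundled Chromium download, and `launch` accepts an explicit `executablePath`. A sketch:

```
# per project: install puppeteer without its own Chromium copy
PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true npm install puppeteer
# then point it at the shared binary in code:
# puppeteer.launch({ executablePath: '/usr/bin/chromium-browser' })
```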
snarfed and iasai joined the channel
#
@rubygems
webmention-endpoint (0.1.0): Discover a URL’s Webmention endpoint. https://rubygems.org/gems/webmention-endpoint
(twitter.com/_/status/1011013511047254016)
mblaney_, [tantek] and iasai joined the channel
#
aaronparecki.com
edited /Microsub (+8) "emphasis"
(view diff)
iasai joined the channel