I don't think they are linked anywhere, but the URL is http://www.youtube.com/feeds/videos.xml?channel_id=<channel_id>
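That feed endpoint returns a regular Atom document, so the standard library is enough to work with it. A small sketch, using a placeholder channel ID and an inline sample document so it runs offline (the real feed has more elements per entry than shown here):

```python
import xml.etree.ElementTree as ET

# Atom elements live in this namespace in the real YouTube feed.
ATOM_NS = {"atom": "http://www.w3.org/2005/Atom"}

def channel_feed_url(channel_id: str) -> str:
    """Build the per-channel feed URL from the comment above."""
    return "http://www.youtube.com/feeds/videos.xml?channel_id=" + channel_id

def video_titles(feed_xml: str) -> list[str]:
    """Extract the title of every <entry> in an Atom feed document."""
    root = ET.fromstring(feed_xml)
    return [e.findtext("atom:title", namespaces=ATOM_NS)
            for e in root.findall("atom:entry", ATOM_NS)]

# Offline demo with a minimal Atom document shaped like the real feed:
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example channel</title>
  <entry><title>First video</title></entry>
  <entry><title>Second video</title></entry>
</feed>"""
print(channel_feed_url("CHANNEL_ID"))
print(video_titles(sample))
```

In practice you would fetch `channel_feed_url(...)` with any HTTP client and hand the body to `video_titles`.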
echelon
Google on several occasions made moves that left the web less semantic.
They dumped microformats and standards in favor of soupy, error-tolerant formats that benefited their search engine and made it harder for other efforts to keep information shareable and accessible.
They wanted it to be easy to get information in, but for you to have to go through them to get information out.
mplanchard
I hand-rolled an atom feed for my statically generated blog. It’s a reasonable, easy format to work with.
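For anyone curious what hand-rolling takes: RFC 4287 requires `id`, `title`, and `updated` on both the feed and each entry. A minimal sketch with the stdlib `ElementTree` (the blog URL and post data here are made up):

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

ATOM = "http://www.w3.org/2005/Atom"

def make_feed(blog_url: str, title: str, posts: list[dict]) -> str:
    """Emit a minimal Atom feed; id, title, and updated are required by RFC 4287."""
    ET.register_namespace("", ATOM)  # serialize Atom as the default namespace
    feed = ET.Element(f"{{{ATOM}}}feed")
    ET.SubElement(feed, f"{{{ATOM}}}title").text = title
    ET.SubElement(feed, f"{{{ATOM}}}id").text = blog_url
    ET.SubElement(feed, f"{{{ATOM}}}updated").text = datetime.now(timezone.utc).isoformat()
    ET.SubElement(feed, f"{{{ATOM}}}link", href=blog_url)
    for post in posts:
        entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
        ET.SubElement(entry, f"{{{ATOM}}}title").text = post["title"]
        ET.SubElement(entry, f"{{{ATOM}}}id").text = post["url"]
        ET.SubElement(entry, f"{{{ATOM}}}updated").text = post["updated"]
        ET.SubElement(entry, f"{{{ATOM}}}link", href=post["url"])
    return ET.tostring(feed, encoding="unicode", xml_declaration=True)

feed_xml = make_feed("https://example.com/", "My blog",
                     [{"title": "Hello", "url": "https://example.com/hello",
                       "updated": "2024-01-01T00:00:00Z"}])
print(feed_xml)
```

A static site generator can emit this alongside the HTML at build time; validators like the W3C feed validator will catch any missing required elements.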
eterevsky
It was an alternative to RSS from 20 years ago that didn't catch on.
ravenstine
I thought it did in fact catch on, but most people still referred to it as "RSS".
I think it caught on well enough; platforms such as WordPress still support it out of the box (I just checked my blog, and it works).
I liked Atom's clean design, but it felt like it was mostly pushed by Google (I may be misremembering), and in the end the syndicated web faded into obscurity anyway.
brabel
IIRC RSS 2.0 included most of what Atom has, no?
talideon
Not really, and it's still more error-prone than Atom.
There's really no good reason to use anything other than Atom.
echelon
IIRC, Aaron Swartz was one of the contributors to the format. RIP.
drob518
Well, that’s a blast from the past.
intrasight
The first iteration of Google's APIs was Atom-based. I do miss XML.
abustamam
One of the API providers I use at work returns responses in XML; we use an XML parser to convert it to JSON, and even then it's not perfect.
What do you like about XML? I feel like I'm missing something.
deaddodo
The main benefit of XML over JSON is that it is structured and can be associated with schemas for built-in validation.
Obviously, that's only a benefit if you care about and utilize those features; most teams doing JSON integrations will just build those into the consumer in lieu of them being provided by the transport. But it is something that some people (especially larger enterprise organizations) value.
dolmen
JSON is structured too (not plain text that needs to be analyzed by an AI).
JSON has JSON Schema.
In addition, JSON is easier to parse and to map to common data structures of programming languages.
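A tiny illustration of that point, comparing how the same made-up record lands in native data structures from each format using only the standard library:

```python
import json
import xml.etree.ElementTree as ET

# JSON parses straight into dicts and lists:
j = json.loads('{"user": {"name": "ada", "tags": ["admin", "ops"]}}')
print(j["user"]["tags"][1])      # plain dict/list indexing

# XML parses into an element tree that you still have to walk,
# and the design must choose between attributes and child elements:
x = ET.fromstring('<user name="ada"><tag>admin</tag><tag>ops</tag></user>')
print(x.get("name"))             # "name" lives in an attribute here
print(x.findall("tag")[1].text)  # repeated children instead of a list
```

Neither is wrong; the JSON version just happens to match the dict/list/string model most languages already have.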
theshrike79
XML had DTDs and schemas 20 years ago.
JSON is still figuring it out.
refulgentis
I don't reach for it often, but I've been around the block a bit; the credit-card processors in the iPad point-of-sale system I built circa 2010 used it, and it seemed a bit off/unnecessary.
In retrospect, it's useful for creating islands of sanity/enforcement in a codebase: a lightweight way to give type annotations across organizational boundaries.
> we use an XML parser to parse it to JSON and even then it's not perfect
I can't quite picture this: how does one parse XML to JSON? I assume there's code that's parsing XML and returning a JSON object? What would make this not perfect, other than a poor implementation of the translator? Would their using JSON help? If JSON is a less expressive format than XML, is it possible to 100% translate their XML to JSON?
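The short answer is that the mapping is lossy by construction: attributes, element ordering, and mixed content have no natural JSON home, and one child serializes differently from two. A naive converter (a deliberately simplified sketch, with a made-up document) shows all three problems:

```python
import xml.etree.ElementTree as ET

def to_plain(elem):
    """Naive XML-to-dict conversion; drops attributes, ordering, and mixed content."""
    children = list(elem)
    if not children:
        return elem.text
    out = {}
    for child in children:
        if child.tag in out:
            # A repeated tag silently changes shape from scalar to list.
            if not isinstance(out[child.tag], list):
                out[child.tag] = [out[child.tag]]
            out[child.tag].append(to_plain(child))
        else:
            out[child.tag] = to_plain(child)
    return out

doc = ET.fromstring('<order id="42"><item>apple</item><item>pear</item>'
                    '<note>rush <b>please</b></note></order>')
plain = to_plain(doc)
print(plain)
```

Here the `id="42"` attribute vanishes, the text "rush " inside `<note>` is lost once a child element appears, and `item` would have been a plain string if the order had only one item, so consumers can't rely on the shape. Real converters paper over this with conventions (e.g. `"@id"` keys, `"#text"` entries, always-list options), which is exactly why the result is "not perfect".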
abustamam
> useful for creating islands of sanity/enforcement in a codebase
Thanks for the insight! Is this what JSDoc/Swagger is now used for?
> I can't quite picture this: how does one parse XML to JSON?
I'm not sure, actually. I haven't personally seen the code; I just hear my coworkers constantly lambasting that API provider for their use of XML. Maybe it's just their lack of documentation that sucks, but it's become a running joke that whenever we get a new partner, the team integrating it predicts their API will be XML.
tkcranny
I'm not clear on the difference between Atom and RSS. Atom seemed to be the better spec, but for my Astro blog I ended up sticking with the built-in `rss` helper it ships with.
JimDabell
In the beginning was RSS 0.x. It was originally intended to be based on RDF; compromises were made, and it ended up dropping the RDF. The spec wasn't very good and had several ambiguities.
Some people forged ahead with a cleaned up RDF-based version and called it RSS 1.0, while other people went ahead with the ambiguities but without RDF and called it RSS 2.0. The person publishing RSS 2.0 considered it finished and refused to update it. There was drama.
A bunch of people decided that there was too much to clean up from within that mess and started a new format, Atom. This ended up being a much better spec, with an official RFC, but by that point everybody was calling any type of feed "RSS", even if it was Atom.
If you have the choice, you should pick Atom.
perrohunter
what is old is new again?
hnlmorg
No, this is just old.
Pity though. RSS/Atom was a fantastic concept, and it's a real shame big tech killed it off.
rambambram
Nothing is killed. It still exists, it's an open protocol after all. And I choose to use it, it's pretty fun to calmly follow around 2000 feeds from - mostly - blogs from HN. And cars... I need my car blogs.
darreninthenet
How do you curate and keep on top of so many feeds? I have ~10 on my RSS reader and I sometimes have trouble keeping up if I have a couple of busy days
Basically, I get to see the latest post from a random feed. Nothing else. No lists of unread new posts from all the feeds. If I like the title and short summary, I click through to the website or blog itself, where I can read the whole thing. There's no FOMO this way, or information overload. Just one post at a time.
Because the whole list of feeds is curated by me, I know that everything is at least a little interesting. I even made a category with YouTube channels that I like, so I can skip their annoying recommended-videos algo.
Besides this basic functionality, I made what I call 'Newspapers': certain topics with a bunch of selected feeds attached, which get checked automatically in the background. When a Newspaper has enough articles, I see a new Newspaper appear. Otherwise it might take months before a feed shows up in the random selection.
geodel
Agreed. The fact that nowadays people, and even big companies, consider hosting their own blog with Atom/RSS feeds outside their core competency is not because big tech killed it.
holistio
Is there any platform for sharing what feeds we follow? Would love to discover some new blogs.
rambambram
Well, my guess is that OPML is underrated. And I understand that, because it's so different from the social media we are used to. On my homepage (link in bio) you can find all the feeds I follow, available as an OPML file. It might be of interest to you, it might not (probably a lot of blogs you know from here; at least half of my 2000 feeds).
One 'dream' of mine is for OPML to be the discovery glue between all kinds of individual personal websites and blogs. But this requires critical mass to have enough to discover and explore, and it needs some fun/interesting software to do that.
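Reading someone's published OPML file is only a few lines of code, which is part of why it could work as discovery glue. A sketch with the standard library, using a made-up two-feed subscription list; feed URLs conventionally live in the `xmlUrl` attribute of (possibly nested) `outline` elements:

```python
import xml.etree.ElementTree as ET

def feeds_from_opml(opml_xml: str) -> list[str]:
    """Collect xmlUrl attributes from every outline element, at any nesting depth."""
    root = ET.fromstring(opml_xml)
    return [o.attrib["xmlUrl"] for o in root.iter("outline") if "xmlUrl" in o.attrib]

sample = """<opml version="2.0"><body>
  <outline text="Cars">
    <outline text="Some car blog" type="rss" xmlUrl="https://example.com/cars.atom"/>
  </outline>
  <outline text="A blog" type="rss" xmlUrl="https://example.org/feed.xml"/>
</body></opml>"""
print(feeds_from_opml(sample))
```

Pointing this at the OPML files people already export from their readers would give you a crawlable graph of who follows whom.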
Lots of sites publish outages, incidents, and downtime over RSS/Atom. It works great for monitoring: post them into Slack with a bot, and you can start a discussion thread about an incident right where you first hear about it.
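A rough sketch of that pipeline with the standard library. The status-page URL and webhook are hypothetical, and the demo parses an inline sample instead of fetching anything; `post_to_slack` uses the standard Slack incoming-webhook JSON shape but is deliberately not called here:

```python
import json
import urllib.request
import xml.etree.ElementTree as ET

ATOM_NS = {"atom": "http://www.w3.org/2005/Atom"}

def incident_entries(feed_xml: str) -> list[dict]:
    """Pull title and link out of each Atom entry in a status feed."""
    entries = []
    for e in ET.fromstring(feed_xml).findall("atom:entry", ATOM_NS):
        link = e.find("atom:link", ATOM_NS)
        entries.append({
            "title": e.findtext("atom:title", namespaces=ATOM_NS),
            "link": link.get("href") if link is not None else None,
        })
    return entries

def post_to_slack(webhook_url: str, entry: dict) -> None:
    """Send one incident as a Slack incoming-webhook message (not invoked in this demo)."""
    body = json.dumps({"text": f"Incident: {entry['title']} - {entry['link']}"}).encode()
    req = urllib.request.Request(webhook_url, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>Elevated error rates</title>
         <link href="https://status.example.com/incidents/123"/></entry>
</feed>"""
print(incident_entries(sample))
```

A real bot would also remember which entry `id`s it has already posted, so polling doesn't re-announce old incidents.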
bawolff
Meh. Big tech didn't kill it off; it was already dead at that point. Sometimes things just aren't popular, no matter how much we might want them to be.
lolive
Google Reader was uber-popular at one time; then Google decided that syndication of articles, with comments, had to be an exclusive feature of their Facebook-esque Google+.
Dec 2005
I think at that time it was still ok?
The hyperscalers stopped that timeline from winning, though.
YouTube had Atom feeds, and I don't think Amazon and Microsoft have relevant syndication.
Meta is surely responsible but that's it, imo.
Or you could create a blog yourself and add a blogroll.
As for discovering new blogs, here are a couple of options, but there are more out there: https://ooh.directory, https://blogroll.org/