I've done this kind of thing many times with codex and sqlite, and it works very well. It's one prompt that looks something like this:
- inspect and understand the downloaded data in directory /path/..., then come up with an sqlite data model for doing detailed analytics and ingest everything into an sqlite db in data.sqlite, and document the model in model.md.
Then you can query the database ad hoc pretty easily with codex prompts (and also generate PDF graphs as needed).
I typically use the highest reasoning level for the initial prompt, and as I get deeper into the data, continuously improve on the model, indexes, etc., and just have codex handle any data migration.
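As a rough illustration of the kind of model such a prompt might produce, here is a minimal SQLite sketch; the table and column names are hypothetical, not something from the comment:

```python
import sqlite3

# Minimal sketch of an HN-style items model an agent might generate.
conn = sqlite3.connect(":memory:")  # use "data.sqlite" for a real run
conn.executescript("""
CREATE TABLE items (
    id INTEGER PRIMARY KEY,
    type TEXT,            -- 'story' or 'comment'
    "by" TEXT,            -- author; quoted because BY is a SQL keyword
    time INTEGER,         -- unix timestamp
    text TEXT,
    score INTEGER
);
CREATE INDEX idx_items_type_time ON items(type, time);
""")

# Ingest a couple of rows (a real pipeline would stream the archive).
rows = [
    (1, "story", "alice", 1700000000, "Show HN: thing", 120),
    (2, "comment", "bob", 1700000100, "Nice work!", None),
]
conn.executemany("INSERT INTO items VALUES (?,?,?,?,?,?)", rows)

# Ad-hoc analytic query: comment count per author.
counts = dict(conn.execute(
    'SELECT "by", COUNT(*) FROM items WHERE type=\'comment\' GROUP BY "by"'
))
print(counts)  # {'bob': 1}
```

From there, iterating on indexes or migrating the schema is just another prompt against the same file.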
cloudpeaklabs
The DuckDB + Parquet approach mentioned in the comments is probably the sweet spot for most people exploring this data. I've built a few data pipelines where the initial instinct was to throw everything into Postgres, but for exploratory analysis on archives like this, being able to just point DuckDB at parquet files without any ingestion step is a huge time saver. Curious whether Modolap handles incremental updates well - the HN dataset grows daily, and keeping an analytical copy in sync without full re-ingestion is usually where these setups get painful.
Brajeshwar
The “Hacker News - Complete Archive” on Hugging Face,[1] recently popped up here. “The data is stored as monthly Parquet files sorted by item ID, making it straightforward to query with DuckDB, load with the datasets library, or process with any tool that reads Parquet.”
Out of curiosity, I tinkered with it using Claude to see trends and patterns (I did find a few embarrassing things about me!).
I don't quite understand how Modolap differs from just asking AI to use any other OLAP engine. Both your website and the GitHub README just emphasise that it's idiosyncratic and your personal approach, without explaining what that is or why anyone should care.
ronfriedhaber
Appreciate the feedback. I shall certainly revamp the README; it is rather stale.
> "how Modolap differs from just asking AI to use any other OLAP engine"
There presently exist two components, the OLAP query engine and the remote infrastructure service. The service enables systems like Codex (or developers as well) to manage datasets, maintain version control over queries, and offload the computational burden to dedicated machines. This is especially beneficial given the current trend of running agents inside micro-VMs.
In addition, it is designed with AI usage in mind. There is significant value in co-design. One could argue that models can use Polars or DuckDB just as well, and that there is no room for improvement, but I do not think this is true.
bastawhiz
What room for improvement is there?
esafak
I don't get the value proposition either; your landing page is underdeveloped. Tracking the query history is trivial. Offloading computation could be done with Polars Cloud or MotherDuck. Can you expand on the "manage datasets" part?
sd9
I'm kind of surprised that Postgres was quite that dominated by MongoDB back in the day. I remember the Mongo fever, but I always thought Postgres held reasonable market share. I guess it was other SQL databases back then; MySQL was still viable.
tombert
It could be that Postgres was so popular that people didn't really discuss it.
Hyperbolic example: literally every human reading this consumes oxygen nearly every moment of the day, and as such no one talks about how great breathing is.
hsuduebc2
I worked on many projects that had wrongly used Mongo instead of an ordinary relational database, and they needed rework over time. It was just hyped in its day. Like microservice architecture a few years later.
sam-bee
Nobody who actually codes in that language ever calls it 'Golang'
hakrgrl
That last chart, showing average comment length, has a clear downtrend, especially in recent months. I wonder why that is.
hakrgrl
I noticed some topics and comments that would usually be in violation of HN guidelines are no longer flagged, and discourse decays into reddit-like jabs and echo chambers. Only a small percentage, but still more than the 0% I was accustomed to.
When searching for references to Go, what does it actually look for? "Go" is a relatively common word, and I hardly see anyone referring to it as Golang.
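This is exactly why naive keyword counts are shaky. A small sketch of the boundary problem (the patterns and sample comments are made up for illustration):

```python
import re

# A case-insensitive word match catches the common verb "go",
# while a case-sensitive one still misses "Golang" spellings.
pattern_naive = re.compile(r"\bgo\b", re.IGNORECASE)
pattern_strict = re.compile(r"\bGo\b")  # case-sensitive helps a little
pattern_golang = re.compile(r"\bgolang\b", re.IGNORECASE)

comments = [
    "I rewrote the service in Go and it got faster.",
    "Let's go with DuckDB for this.",
    "Golang's error handling is verbose.",
]

naive = sum(bool(pattern_naive.search(c)) for c in comments)
strict = sum(bool(pattern_strict.search(c)) for c in comments)
golang = sum(bool(pattern_golang.search(c)) for c in comments)
print(naive, strict, golang)  # 2 1 1
```

The naive pattern overcounts (it matches "Let's go"), while the strict one undercounts (it misses the "Golang" comment), so any reported mention rate depends heavily on which rule was used.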
xnorswap
5% of all comments mention Claude code?
Am I reading that right?
SyneRyder
I don't think it's a percentage of all comments. I think it's either the percent of articles (topics) posted to the site that are about Claude Code, or maybe a percentage of topics where at least one person mentioned Claude Code in the comments.
The latter seems easier to achieve. To borrow from another internet rule of thumb: "As an online discussion grows longer, the probability of someone mentioning Claude approaches one."
xnorswap
I think the latter interpretation must be right; it seems plausible, if high.
Neither raw comments nor raw submissions seems credible.
epaga
Well now it's 5.00001%.
hsuduebc2
I really love Codex. The price/value ratio compared to Claude Code is, at least in my opinion, much better.
moralestapia
Do not estimate/plot DAUs/MAUs; it's not a pretty picture :'(
hakrgrl
Why do you say that?
throwaway290
HN data is open? Under what conditions is it distributed?
bastawhiz
There's an API link at the bottom of every page.
GeoAtreides
no conditions! what, you didn't know your content would be free for LITERALLY EVERYONE when you made your account? and that you can't delete your comments? and it's a free for all? well, that's on you buddy, not HN, definitely not HN in any way
hsuduebc2
There are even datasets created by users over time and shared publicly.
1. https://huggingface.co/datasets/open-index/hacker-news
Would be interesting to see how many comments violate the guidelines over time. https://news.ycombinator.com/newsguidelines.html