I took a look at this... and you use the Ollama API behind the scenes?? Why not use an OpenAI compatible endpoint like the rest of the industry?
Locking it to Ollama is stupid. Ollama is just a wrapper for llama.cpp anyway. Literally everyone else running LLMs locally, whether llama.cpp, vLLM (which is what the inference providers use; I know the DeepSeek API servers run it behind the scenes), or LM Studio (for the casual people), exposes an OpenAI-compatible API endpoint. Not to mention OpenAI, Google, Anthropic, DeepSeek, OpenRouter, etc. all mainly use (or at least fully support, in the case of Google) an OpenAI-compatible endpoint.
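(For context, "OpenAI-compatible" in practice means one POST body shape that works against llama.cpp's server, vLLM, LM Studio, or the hosted providers; a minimal sketch, where the base URL and model name are placeholders:)

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a request against any OpenAI-compatible /v1/chat/completions endpoint.

    The same payload works unchanged against llama.cpp's server, vLLM,
    LM Studio, and the hosted providers; only base_url and model differ.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# e.g. build_chat_request("http://localhost:8080", "phi3:mini-4k", "Tag this link")
```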
dewey [3 hidden]5 mins ago
> Locking it to Ollama is stupid.
If you don’t like this free and open source software that was shared it’s luckily possible to change it yourself…or if it’s not supporting your favorite option you can also just ignore it. No need to call someone’s work or choices stupid.
CaptainFever [3 hidden]5 mins ago
You could contribute an option!
zxcvgm [3 hidden]5 mins ago
Cool, looks like text highlighting is a new addition in 2.10. There aren't any examples in the demo site of this, but can it capture the highlighted text snippets and show them in the link details page? That would help me recall quickly why I saved the link, without opening the original link and re-reading the page. I haven't really seen this in other tools (or maybe I just haven't looked hard enough), except Memex.
daniel31x13 [3 hidden]5 mins ago
> There aren't any examples in the demo site of this
This is because we haven't updated the demo to the latest version.
> but can it capture the highlighted text snippets and show them in the link details page?
That's a good idea that we might implement later, but at the moment you can only highlight the links[1].
[1]: https://blog.linkwarden.app/releases/2.10#%EF%B8%8F-text-hig...
Great product! Does it handle special metadata like https://mymind.com/ does, e.g. showing prices directly in the UI if the saved link is a product in a shop? If not, things like that would be a great addition!
touristtam [3 hidden]5 mins ago
Side note: when a website advertising a product does a bad job of optimising the loading of the page, that's usually a red flag for me; yes, this website has noticeable jitter when scrolling up and down even though it _only_ loads around 70 MB worth of assets initially.
smcin [3 hidden]5 mins ago
(The historical price on the day the link was published, or the current price, or over a date range, or configurable? I see different use-cases)
ryan29 [3 hidden]5 mins ago
I'd be interested to hear your thoughts on having a PWA vs regular mobile apps since it looks like you started with a PWA, but are moving to regular apps. Is that just a demand / eyeballs thing or were there technical reasons?
daniel31x13 [3 hidden]5 mins ago
Mostly the UX it provides. PWAs are a quick and easy way to support mobile, but the UX is nowhere near as good as a traditional mobile app…
thm [3 hidden]5 mins ago
I have about 30k .webarchive files — is there a chance to import them?
- Does the web front end support themes? It’s a trivial thing but based on the screenshots, various things about the default theme bug me and it would be nice to be able to change those without a user style extension.
- Does it have an API that would allow development of a native desktop front end?
daniel31x13 [3 hidden]5 mins ago
> Does the web front end support themes?
Yes[1].
> Does it have an API that would allow development of a native desktop front end?
Also yes[2].
[1]: https://blog.linkwarden.app/releases/2.9#-customizable-theme
[2]: https://docs.linkwarden.app/api/api-introduction
How difficult would it be to import an existing list of links/tags? Also, if I were using a hosted version, would I be able to eg insert/retrieve files via an API call?
I ask because currently I use Readwise but have a local script that syncs the reader files to a local DB, which then feeds into some custom agent flows I have going on on the side.
daniel31x13 [3 hidden]5 mins ago
> How difficult would it be to import an existing list of links/tags?
Pretty easy if you have it in the standard bookmarks HTML file format.
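(For anyone curious, the bookmarks HTML file mentioned here is the old Netscape-style export every browser produces; a hedged sketch of pulling links and tags out of one with the stdlib, where the `TAGS` attribute follows the common export convention:)

```python
from html.parser import HTMLParser

class BookmarkParser(HTMLParser):
    """Extract (url, title, tags) from a Netscape-style bookmarks HTML export."""
    def __init__(self):
        super().__init__()
        self._pending = None   # attrs of the <A> tag we are currently inside
        self.bookmarks = []    # list of {"url", "title", "tags"} dicts

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._pending = dict(attrs)

    def handle_data(self, data):
        # The first text node after an <A> open tag is the bookmark title.
        if self._pending is not None:
            tags = self._pending.get("tags", "")
            self.bookmarks.append({
                "url": self._pending.get("href", ""),
                "title": data.strip(),
                "tags": tags.split(",") if tags else [],
            })
            self._pending = None

def parse_bookmarks(html: str):
    parser = BookmarkParser()
    parser.feed(html)
    return parser.bookmarks
```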
> Also, if I were using a hosted version, would I be able to eg insert/retrieve files via an API call?
Yup, check out the API documentation:
https://docs.linkwarden.app/api/api-introduction
What I'd really love is a super compact "short-name only" view of links. Just words, not lines or galleries. For super-high content views.
https://blog.linkwarden.app/releases/2.8#%EF%B8%8F-customiza...
Ahh, yes, you can reduce it to names with a lot of columns. In my personal ideal, I'd love to store a short-name for a link and have no boxes. Personally, I've always wanted links to be like the tag cloud in pinboard and to have a page with multiple tags/categories.
I'd also love a separation of human tags and AI tags (even by base or stem), just in case they provided radically different views, but both were useful.
EDIT:
Just took a quick look through the documentation: is there a native or supported distinction between links that are like bookmarks and links that are more content/articles/resources?
colordrops [3 hidden]5 mins ago
Could still be a lot more compact. Would also like the hierarchical view in the main pane.
In any case, nice project, thank you.
colordrops [3 hidden]5 mins ago
Came here to ask for exactly this.
Tsarp [3 hidden]5 mins ago
Curious if the paid tier helps support development of the project
daniel31x13 [3 hidden]5 mins ago
Definitely! :)
dikdok [3 hidden]5 mins ago
> Full page archival
Does it grab the DOM from my browser as it sees it? Or is it a separate request? If so, how does it deal with authentication?
daniel31x13 [3 hidden]5 mins ago
So there are different ways it archives a webpage.
It currently stores the full webpage as a single HTML file, a screenshot, a PDF, and a read-it-later view.
Aside from that, you can also send the webpages to the Wayback Machine to take a snapshot.
To archive pages behind a login or paywall, you can use the browser extension, which captures an image of the webpage in the browser and sends it to the server.
dikdok [3 hidden]5 mins ago
> To archive pages behind a login or paywall, you can use the browser extension, which captures an image of the webpage in the browser and sends it to the server.
Just an image? So no full text search?
warkdarrior [3 hidden]5 mins ago
> To archive pages behind a login or paywall, you can use the browser extension, which captures an image of the webpage in the browser and sends it to the server.
It'd be awesome to integrate this with the SingleFile extension, which captures any webpage into a self-contained HTML file (with JS, CSS, etc, inlined).
daniel31x13 [3 hidden]5 mins ago
We might add this; it's actually frequently requested by users :)
yapyap [3 hidden]5 mins ago
Very very neat!
A question arose for me though: if the AI tagging is self-hostable as well, how taxing is it on the hardware, and what would the minimum viable hardware be?
daniel31x13 [3 hidden]5 mins ago
Thanks! A lightweight model like phi3:mini-4k is enough for this feature.[1]
It’s worth mentioning that you can also use external providers like OpenAI and Anthropic to tag the links for you.
[1]: https://docs.linkwarden.app/self-hosting/ai-worker
I've been using Karakeep (formerly known as Hoarder) and it's been a great experience so far. One thing they're working on now is a Safari browser extension. I noticed Linkwarden lacks a Safari browser extension - is one on the roadmap?
Lately I've been using macOS and I've noticed Chromium-based browsers use more resources than the native Safari. This is especially true of Microsoft Edge, which sometimes consumes tens of gigabytes of RAM (possibly a memory leak?). In an attempt to preserve battery life and SSD longevity, Safari is now my go-to browser on macOS.
InsideOutSanta [3 hidden]5 mins ago
I'm also using Karakeep. It also has LLM-powered tagging, which, in my experience, works excellently. It's easy to self-host, fast on a relatively underpowered NAS, and I love the UX. Highly recommended.
Linkwarden looks nice, too, but when picking an option, I wanted one with a native Android app.
piyuv [3 hidden]5 mins ago
I chose linkwarden after seeing hoarder’s native iOS app
evanjrowley [3 hidden]5 mins ago
Bitter irony is that the one with the best iOS app is lacking a Safari extension, while the one with a mediocre iOS app already has a beta Safari extension.
rrgok [3 hidden]5 mins ago
I tried with the demo, but full content search does not work.
I don't know if the demo is randomly generated, anyway this is the test I did.
Text to search in the top search bar: RRP
Page that contains that term: https://www.da.vidbuchanan.co.uk/blog/r1-jailbreak.html
Result found: 0
Does this search the content of the archived pages?
The trap of using indexing and a database when plain grep over disk-based storage would work.
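(In the grep-over-disk spirit the comment describes, a minimal full-text scan over a directory of archived pages might look like this; the `.html` layout is an assumption, not how Linkwarden actually stores things:)

```python
from pathlib import Path

def grep_archive(root: str, term: str):
    """Case-insensitive substring scan over every archived .html file under root.

    No index, no database: just disk reads, which is often fast enough
    for a personal archive of a few thousand pages.
    """
    term = term.lower()
    hits = []
    for page in Path(root).rglob("*.html"):
        text = page.read_text(errors="ignore")
        if term in text.lower():
            hits.append(str(page))
    return sorted(hits)
```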
gibibit [3 hidden]5 mins ago
Is there any software that can provide verified, trusted archives of websites?
For example, we can go to the Wayback Machine at archive.org to not only see what a website looked like in the past, but prove it to someone (because we implicitly trust The Internet Archive). But the Wayback Machine has deleted sites when a site later changes its robots.txt to exclude it, meaning that old site REALLY disappears from the web forever.
The difficulty for a trusted archive solution is in proving that the archived pages weren't altered, and that the timestamp of the capture was not altered.
It seems like blockchain would be a big help, and would prevent back-dating future snapshots, but there seem to be a lot of missing pieces still.
Thoughts?
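(The tamper-evidence half is the straightforward part; a hedged sketch of a snapshot record whose digest you could anchor in any append-only log, be it a blockchain, a transparency log, or a signed git commit. Function names are illustrative, not from any real archiving tool:)

```python
import hashlib

def snapshot_record(url: str, captured_at: str, content: bytes) -> dict:
    """Bind url + timestamp + content into one verifiable record.

    Publishing record["digest"] somewhere append-only proves the capture
    existed by that point. It does NOT prove the server really served this
    content at that time, which is the genuinely hard part.
    """
    digest = hashlib.sha256(
        url.encode() + b"\x00" + captured_at.encode() + b"\x00" + content
    ).hexdigest()
    return {"url": url, "captured_at": captured_at, "digest": digest}

def verify(record: dict, content: bytes) -> bool:
    """Recompute the digest from the claimed inputs and compare."""
    expected = snapshot_record(record["url"], record["captured_at"], content)
    return expected["digest"] == record["digest"]
```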
shrinks99 [3 hidden]5 mins ago
Webrecorder's WACZ signing spec (https://specs.webrecorder.net/wacz-auth/latest) does some of this — authenticating the identity of who archived it and at what time — but the rest of what you're asking for (legitimacy of the content itself) is an unsolved problem as web content isn't all signed by its issuing server.
In some of the case studies Starling (https://www.starlinglab.org/) has published, they've published timestamps of authenticated WACZs to blockchains to prove that they were around at a specific time... More _layers_ of data integrity but not 100% trustless.
gibibit [3 hidden]5 mins ago
Very informative, thanks!
dj0k3r [3 hidden]5 mins ago
Take a look at SingleFile, a project that lets you save the entire webpage. It has an integration for saving the hash of the page on a blockchain. You can choose to set it up between parties who are interested in proving the page's provenance and authenticity.
bravura [3 hidden]5 mins ago
I want LLM accessible bookmarks. That's it.
It doesn't work yet.
I use SingleFile to archive pages I'm viewing into Linkding.
Then I have a BeautifulScript4 script to strip the assets.
Then I use Jina's ReaderLM v2 to render the HTML to proper Markdown: https://huggingface.co/jinaai/ReaderLM-v2
Except, of course, for longer table-oriented text documents like HN that doesn't work.
I want a plaintext archive of web pages in a github repo or similar. Not a fancy UI/UX.
- SingleFile: https://github.com/gildas-lormeau/SingleFile
- Linkding: https://github.com/sissbruecker/linkding
- BeautifulScript4: https://beautiful-soup-4.readthedocs.io/en/latest/ (assumed that was the python library Beautiful Soup 4 and not "Script")
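(The strip-the-assets step can be sketched with the stdlib alone; this is a stand-in for the commenter's script, which presumably uses Beautiful Soup, and the tag lists are my own assumption about what "assets" means here:)

```python
from html.parser import HTMLParser

# Tags whose entire subtree should be dropped:
DROP_TREE = {"script", "style", "svg", "video", "iframe"}
# Void tags with no subtree, dropped outright:
DROP_VOID = {"img", "link", "source"}

class AssetStripper(HTMLParser):
    """Re-emit HTML with scripts, styles, and media removed."""
    def __init__(self):
        super().__init__()
        self.out = []
        self._skip = 0  # >0 while inside a dropped subtree

    def handle_starttag(self, tag, attrs):
        if tag in DROP_TREE:
            self._skip += 1
        elif self._skip == 0 and tag not in DROP_VOID:
            attr_s = "".join(f' {k}="{v}"' for k, v in attrs if v is not None)
            self.out.append(f"<{tag}{attr_s}>")

    def handle_endtag(self, tag):
        if tag in DROP_TREE:
            self._skip = max(0, self._skip - 1)
        elif self._skip == 0 and tag not in DROP_VOID:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if self._skip == 0:
            self.out.append(data)

def strip_assets(html: str) -> str:
    stripper = AssetStripper()
    stripper.feed(html)
    return "".join(stripper.out)
```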
As a paid product, has anyone used Raindrop as well and have opinions/comparisons? And on the self hosted side, vs Hoarder?
I’ve been considering switching from Raindrop to a self hosted option, but while I like self hosting I’m also leaning towards just paying someone to handle this particular service for me.
spiffotron [3 hidden]5 mins ago
I used to use Raindrop but found it a bit bloated with features I never use. I've switched to self-hosting Linkding: https://linkding.link and enjoy the much more minimal experience.
exhilaration [3 hidden]5 mins ago
I've never heard of raindrop and it looks cool but I see the .ru in one of their screenshots -- are they based in Russia? Any concerns with doing business with a Russian company, in the context of sanctions etc.?
mwnivek [3 hidden]5 mins ago
Rustem Mussabekov on 24 Oct 2023 wrote:
"I'm founder of Raindrop.io. I'd like to clarify information about the origin of myself and the project. While I did live in Russia for a long time and initially started Raindrop there, I relocated to my motherland, Kazakhstan, shortly after the war began. I also moved all financial and business matters there.
I am no longer associated with Russia in any way. It would be great if this information could be added to the article."
Source: https://numericcitizen.me/when-war-in-ukraine-influences-my-...
I like Hoarder (Karakeep). It's got an API and an MCP server as well to play with now, locally and self-hosted. I'll check this out as well.
dennisy [3 hidden]5 mins ago
I love these sorts of apps, but I'm still not really sure why I need the webpages. Any time I do research on a topic I find more things than I can read in that session, so what are the old links for?
I would love to hear how people use this product once they have stored the links!
ryan29 [3 hidden]5 mins ago
I've used https://historio.us since 2011 and still pay for it to keep access to all the pages I've archived over the years. The price has been kept low enough that I can't bring myself to cancel it even though I've been using self-hosted https://archivebox.io/ for the last few years.
I always include an archived link whenever I reference something in documentation. That's my main use at the moment.
However, I also feel like I've gotten a lot of really good value when trying to learn a new development topic. Whenever I find something that looks like it might be useful, I archive it and, because everything is searchable, I end up with a searchable index of really high quality content once I actually know what I'm doing.
I find it hard to rediscover content via web search these days and there's so much churn that having a personal archive of useful content is going to increase in value, at least in my opinion.
touristtam [3 hidden]5 mins ago
How much space is the self-hosted solution taking? I've been meaning to try and find a better way to look through my bookmarks since no browser is capable of doing that properly it seems.
Lammy [3 hidden]5 mins ago
I haven't tried Linkwarden (still doing the `wget --mirror` thing myself), but one of the reasons I like archiving pages is so I can have a collection of pages that work in older browsers on vintage computers. I pop open View Source on any site I find that looks even vaguely old, and if I see a DOCTYPE up to and including XHTML 1.1 I archive that shit immediately even if it's not a site about any of my biggest interests lol
agnishom [3 hidden]5 mins ago
I recently started using Hoarder for this, once Omnivore went under...
Looks really neat. But it also seems a bit heavyweight (for the client). Is that the case compared to https://readeck.org/en/ ?
raybb [3 hidden]5 mins ago
Just wish it had offline support. The main use case for me is when I'm traveling and have spotty internet: read articles offline and hopefully add some to the queue to be saved when I'm online again.
daniel31x13 [3 hidden]5 mins ago
We’re working on an official mobile app[1], which will most likely include this feature sometime after its launch :)
[1]: https://github.com/linkwarden/linkwarden/issues/246#issuecom...
An official app with that sounds great! From what you know, would it be possible to also have offline support with the PWA?
csdvrx [3 hidden]5 mins ago
Will the offline mode work on laptops?
FireInsight [3 hidden]5 mins ago
No experience with this yet, but looking to upgrade from Linkding. The main features I'm looking forward to are syncing bookmarks with native browser bookmarks through Floccus and being able to make highlights on the articles I save.
ibaikov [3 hidden]5 mins ago
Recently started self-hosting it. I like it. I tried Hoarder, but it was overcomplicated and consumed way more resources. Now it has MCP support, so I might use it with n8n, we'll see.
A couple improvements I'd like:
I want drag-and-drop link saving.
If I add a Reddit link, it doesn't import the Reddit thread title; Linkwarden uses Reddit's generic title ("Reddit - the heart of the internet") instead. Same goes for a few other websites like GitLab.
I'd like an MCP.
Resource usage optimization: while it is lighter than Karakeep/Hoarder, it consumes 500-950 MB of RAM for me, and I have only 500 links added.
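(The Reddit/GitLab title issue above is usually fixable at scrape time: those sites put the real thread title in Open Graph meta tags even when `<title>` is generic. A hedged sketch of preferring `og:title`, with names of my own choosing, not Linkwarden's actual scraper:)

```python
from html.parser import HTMLParser

class TitlePicker(HTMLParser):
    """Collect both the og:title meta tag and the plain <title> element."""
    def __init__(self):
        super().__init__()
        self.og_title = None
        self.page_title = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("property") == "og:title":
            self.og_title = a.get("content")
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title and self.page_title is None:
            self.page_title = data.strip()

def best_title(html: str) -> str:
    """Prefer og:title (the real thread title) over the generic <title>."""
    p = TitlePicker()
    p.feed(html)
    return p.og_title or p.page_title or ""
```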
I'm a heavy user and really happy with the speed and stability I'm getting running Linkwarden on my Hetzner VPS. The only problem was in the beginning: importing a lot of existing links from Pinboard, the available RAM of my meager VPS was exceeded multiple times by metadata resolution. But once that was overcome, it's a zero-effort tool.
Started using it a while back. Works rather well, even though some minor UX quirks exist. Self-hosting is easy, too, with Docker Compose. If you're in the market for a web-accessible bookmark manager, maybe give it a go!
salynchnew [3 hidden]5 mins ago
Very cool project!
QQ for users: How is the UX compared with ArchiveBox?
sloped [3 hidden]5 mins ago
This looks nice, I like how many of these tools have been surfacing. I recently started using https://readeck.org/, which aims to solve some of the same problems and really like it. Much better than a "bookmark" tool for things like articles.
My two favorite parts of Readeck are:
- it provides an OPDS catalog of your saved content so you can very easily read things on your e-book reader of choice. I use KOReader on a Kindle and have really enjoyed reading my saved articles in the backyard after work.
- you can generate a share link. I have used this to share some articles behind paywalls with friends and family where before I was copying and pasting content into an email.
ijustwanttovote [3 hidden]5 mins ago
I'm paying for readwise, any benefits of switching over to this?
borg16 [3 hidden]5 mins ago
It's a third of the price to begin with. I think Readwise has a winner in its Reader app, but they sure do charge a premium for it. You can get the same functionality in Linkwarden or Pinboard for a fraction of Readwise's subscription pricing.
carterschonwald [3 hidden]5 mins ago
Is there a way to import pinboard or similar data?
daniel31x13 [3 hidden]5 mins ago
Yes, you can import any standard bookmarks HTML file.
There are also other import formats we support, like Wallabag, Omnivore, etc…
human_llm [3 hidden]5 mins ago
This looks interesting. How feature-crippled is the self-hosted version?
dugite-code [3 hidden]5 mins ago
Not at all as far as I am aware. I use floccus to sync my bookmarks to it and it does the job quite well
xnx [3 hidden]5 mins ago
I have yet to find anything that has the effort vs. results benefit of CTRL+S -> "Webpage, Single File (*.mhtml)". Even works on mobile.
FireInsight [3 hidden]5 mins ago
Tagging, full-text search, page highlights, a nice UI,... You might call that bloat, I don't. Besides, I could not find any equivalent to ctrl-s the webpage on mobile Firefox.
xnx [3 hidden]5 mins ago
> I could not find any equivalent to ctrl-s the webpage on mobile Firefox.
True. There used to be an extension that enabled the hidden code path, but that stopped working years ago. I switched to Kiwi browser.
belter [3 hidden]5 mins ago
As of this moment... this post has 4 points and 2 comments... How does it make it to number 3 on the HN front page?
A4ET8a8uTh0_v2 [3 hidden]5 mins ago
Velocity. Obviously, I don't really know and am only speculating. Still, the project does look nice. I personally use ArchiveBox, but I will admit this looks a lot more polished.
Magma7404 [3 hidden]5 mins ago
[flagged]
Vinz_ [3 hidden]5 mins ago
The first version was released in July 2023, I don’t see why this would shut down in two months.
This comment makes a lot of assumptions but does not substantiate any of them. Please refrain from dunking on other people's projects like this; it does not contribute anything to the discussion.
bovermyer [3 hidden]5 mins ago
Jeez man, who crapped in your Cheerios this morning? I hope your day gets better.
I understand an open source project needs revenue to survive, but the reason this project grew so large is its self-hostable nature, and the push of the cloud offering is the opposite of that.
I really hope this is not the first step towards enshittification...
ctxc [3 hidden]5 mins ago
Nah, I just see this as a sustainable way to keep the project alive :)
Some key features of the app (at the moment):
- Text highlighting
- Full page archival
- Full content search
- Optional local AI tagging
- Sync with browser (using Floccus)
- Collaborative
Also, for anyone wondering, all features from the cloud plan are available to self-hosted users :)
https://docs.linkwarden.app/self-hosting/ai-worker
https://www.linkace.org/ (my fave)
https://github.com/sissbruecker/linkding
https://github.com/jonschoning/espial
https://motd.co/2023/09/postmarks-launch/
https://betula.mycorrhiza.wiki/
https://linkhut.org/
https://grimoire.pro/
https://hoarder.app/
https://mymind.com/
https://github.com/omnivore-app/omnivore
https://github.com/wallabag/wallabag
https://zotero.org/
This one seems to be directly related to the webrecorder project which seems like a pretty full featured warc recorder.
https://readeck.org/en/
https://hoarder.app
https://github.com/karakeep-app/karakeep
Seems very similar.
[0] http://raindrop.io/
I started using it primarily for images inspiration collecting but it has grown into my "everything" collecting, including bookmarks.
Libraries can be shared via file sharing (e.g. google drive, dropbox), one time purchase price, amazing software design, extensions, and more.