Subject: [htdig] indexing over and over again
From: Clint Gilders (email@example.com)
Date: Thu Jun 08 2000 - 20:37:13 PDT
Wow is all I can say about ht://Dig. No more of those slow, clunky
Perl scripts that time out. Here's the first of what I imagine will be
many questions on running htdig on a really big web site.
So ... we are starting to play with htdig. The one thing I can't
seem to find a clear answer on is how to add new listings to the database
without re-indexing the whole database. When I run htdig in
verbose mode without the -i switch and point it at a new URL, I notice
that it still retrieves the previously indexed pages from other URLs,
even though it doesn't re-index them.
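For reference, the two invocations I'm comparing look roughly like this
(the config path is just an example, not our actual setup):

```shell
# Initial dig: -i discards the existing databases and rebuilds from scratch
htdig -i -v -c /opt/htdig/conf/htdig.conf

# Update dig (no -i): re-visits the already-known URLs to check them for
# changes, which is why the previously indexed pages are still retrieved
htdig -v -c /opt/htdig/conf/htdig.conf
```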
We've indexed our whole site (100,000+ pages) and now want to index
other sites as they are submitted to us, and as we find them, but
running it this way takes forever (plus it will make our
bandwidth charges skyrocket).
Should I have it use a different database while digging these new
sites, and then merge it with the original? If so, how would I go
about configuring it (I'm still using the pre-fab rundig and
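Something like the following is what I have in mind, if the merge route
is right; every path, filename, and URL here is made up, and I'm not
sure this is how htmerge's -m option is meant to be used:

```shell
# new_sites.conf would be a second config file pointing at its own
# database_dir and listing only the newly submitted sites, e.g.:
#   database_dir: /opt/htdig/db.new
#   start_url:    http://newly-submitted-site.example.com/

# Dig only the new sites into the separate database
htdig -i -v -c /opt/htdig/conf/new_sites.conf

# Then merge that database into the main one; my understanding is that
# -m names the config of the database being merged in
htmerge -v -c /opt/htdig/conf/htdig.conf -m /opt/htdig/conf/new_sites.conf
```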
Server Master Kingsnake.com
To unsubscribe from the htdig mailing list, send a message to
You will receive a message to confirm this.
This archive was generated by hypermail 2b28 : Thu Jun 08 2000 - 17:31:54 PDT